From joly at punkcast.com Fri Mar 8 08:28:26 2019 From: joly at punkcast.com (Joly MacFie) Date: Fri, 8 Mar 2019 11:28:26 -0500 Subject: [ih] Fwd: Internet Histories Early Career Researcher Award In-Reply-To: <4D6C65DA-2E9E-4DA3-BD6B-94C453D6F471@nyu.edu> References: <68C532C1-63DC-4F38-9286-54C78B8C3DE0@cc.au.dk> <4D6C65DA-2E9E-4DA3-BD6B-94C453D6F471@nyu.edu> Message-ID: From my inbox! ---------- Forwarded message --------- Are you conducting groundbreaking research in the field of Internet or Web history? Then the newly established 'Internet Histories Early Career Researcher Award' may be something for you. We invite any interested early career researchers (master's students, doctoral students, and post-doctoral researchers) to send us an original article, between 6,000 and 8,000 words, by 15 October 2019. All selected articles will be published in a special issue of the journal Internet Histories in the second half of 2020 and also automatically be nominated for the 'Internet Histories Early Career Researcher' Award, which carries a prize of 500 euros. The jury of this Award is composed of the following members of the international Editorial Board of Internet Histories: - Janet Abbate, Virginia Polytechnic Institute and State University, USA - Kevin Driscoll, University of Virginia, USA - Greg Elmer, Ryerson University, Canada - Benjamin Thierry, Paris-Sorbonne University, France - Jane Winters, School of Advanced Study, University of London, UK Please have a look at the call at https://think.taylorandfrancis.com/internet-histories-si-early-career-researcher-award/ -- -- --------------------------------------------------------------- Joly MacFie 218 565 9365 Skype:punkcast --------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Internet_Histories_Early_Career_Researcher_Award.pdf Type: application/pdf Size: 119104 bytes Desc: not available URL: From karl at cavebear.com Sun Mar 10 16:20:57 2019 From: karl at cavebear.com (Karl Auerbach) Date: Sun, 10 Mar 2019 16:20:57 -0700 Subject: [ih] Hello to the list, plus some comments Message-ID: Hello - I had not heard about the existence of this list until recently. So I'll begin with some background: I was a student at UCLA from 1967. I had a student job with the folks who were doing automobile traffic studies (tracking how cars exited freeways, crashing cars, etc). I learned programming on the IBM 7094 - which was in the room right next to the Sigma Seven where IMP #1 lived. I was part of the UCLA Computer Club. We were a troublesome group (don't ask what we did to a non-responsive ice cream machine in the hallway outside of the computer club office.) And one thing that folks like Mark Kampe and others did was to drop things off the top of Boelter Hall to see what would happen. We dropped things like super balls (at room temperature and also frozen in liquid nitrogen), an empty lead radiation container, etc. IMP #1 was just so very attractive in that regard - it looked like an armored refrigerator with a hook on the top, so we were just so very tempted to see how rugged it was. Good thing we did not give in to that temptation. I began my actual non-academic work on some projects dealing with satellites and other stuff - go see the movie War Games or Dr. Strangelove. We built that stuff for real. One of my lesser moments was when I instructed the operators of a 60 meter dish in the middle of Australia to point the thing directly towards the ground. But I did gain a lot of real-life experience about the issues of receiving and handling a relentless stream of network data (from satellites.) 
Anyway, later on - around 1971 - I began working in the network and operating system research group at System Development Corporation (SDC) in Santa Monica. I worked with Dave Kaufman and, later, Frank Heinrich (who came from Dave Farber's DCS, Distributed Computer System, project at UC Irvine.) By the way, I am of the opinion that DCS deserves a far more prominent place in the history of the internet, or perhaps in the history of cloud computing, than it has received. (It was during my time at SDC that I got the chance to do a small amount of work with Donald Davies - we were all working in a research site located on the top floors of the Stoke Poges golf club - for those of you who don't know, that was the venue where Goldfinger and James Bond played a round of golf and Odd Job used his hat to slice the head off of a marble statue.) Since then I've worked on lots of stuff ranging from Unix/Linux kernels to early email (pre-sendmail), ATM (banking) networks, protocols (most particularly SNMP and some "I wish we had done better" work on netbios-over-TCP, RFC 1001/1002 - yes, I sort of deserved it when Paul Mockapetris glowered at me (from the top of a table) and declared that I had destroyed DNS. ;-) I've also done work with network video (a period in which Steve Casner taught me how much I didn't know) and IP multicast. I've also spent a great deal of time doing interoperability testing - both at the Interop shows (I was part of the design team from nearly the start) and at the old TCP/IP bakeoff events. But my largest interest is in the question of repair of the net. My grandfather was a radio repairman; my father had a shop that repaired TVs that other guys couldn't repair. So fixing things is kinda genetic. Back in the early 1990s I formed a company to build the first internet buttset - a tool for people on cold floors in wiring closets or up on poles in the rain who needed to get busy diagnosing and repairing within a few seconds. 
That tool was quite impressive, well received, and very useful, but I did not then know how to run a company, and the product vanished (pieces remain in products from companies such as Fluke.) One thing that I have observed about the development of the internet, as compared to how old Ma Bell approached the telephone network, is the lack of formal and mandated test and loopback hooks in internet protocols. However, I have remained intrigued by the notion of homeostatic networking - how the internet can be made not just more readily diagnosed and repaired but also more self-healing. The internet is being perceived as something approaching a lifeline-grade utility, and it's been my feeling that this ought to drive a change in the way we engineer the net. But that's a huge topic for another time; to plant a hook, I'll just suggest that perhaps we can learn a lot of network tricks by looking at how plants and animals enhance their ability to survive change. One set of tools that I've produced implements Jon Postel's notion of a flakeway - in his view, a router that intentionally did things wrong. I've extended that notion to build tools for developers (I emphasize that these tools are to help developers build more robust code, not to develop attacks) that can statefully (or not) do things ranging from dropping/duplicating packets to coercing a TCP initial three-way exchange into a four-packet exchange (by splitting the typically merged middle SYN+ACK packet) or adding/removing options ... and beyond. You all know about my ICANN adventures; to avoid going into the weeds, let's not talk about that at the moment. ;-) Of late I have been bemoaning the slow erosion of the end-to-end principle. I wrote a large (26 page) blog entry about how I see the present-era internet evolving into something reminiscent of the political landscape of 15th century Europe, an internet composed of isolated islands that are connected by highly protected bridges. 
https://www.cavebear.com/cavebear-blog/internet_quo_vadis/ Oh yeah, I'm also an attorney, recipient of the Norbert Wiener Award for Social and Professional Responsibility, was named a fellow of law and technology at Caltech and Loyola Law (Los Angeles), and was understudy crocodile in a production of Peter Pan. Now some comments: One of our first projects at SDC (circa 1972) was for the US Joint Chiefs of Staff - a group with a decidedly military point of view - regarding the survivability of packet switching networks, most particularly networks derived from ARPAnet ideas. It was very explicitly part of our project to wonder about the impact of nuclear war - we quite openly spoke of the impact of "gateways" (routers) and links being vaporized. However, our work was done under a layer of US and UK secrecy - it was often classified or, if not, it was considered sensitive information. As a consequence, few people ever heard of our work. (Although eventually our work created the first protected VPNs, the first working operating systems written to and formally validated against formal models of security, capability-based operating systems, key distribution systems, etc.) By the way, my wife and I, having no experience in making videos or sound recordings, much less in documentary film making, but both of us with a bit of background in theatre, set forth a few years back to gather interviews about the internet from about 1965 through 1995 and create a series of short (5 minute) videos about the creation of the internet. (We plan about 200 episodes.) We've only released a very few - and our lack of skill shows through, but we are improving. For various reasons we had to lay the project aside a few years back, but we intend to resume with our interviewing. (A typical interview runs for a couple of hours.) You can see our series trailer at https://history-of-the-internet.org/videos/trailer/ 
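[Editor's aside: the flakeway idea Karl describes above is concrete enough to sketch. The toy Python fragment below is purely illustrative - the class, method, and parameter names are invented here, and Karl's actual tools sit in a real forwarding path and do far more (reordering, option rewriting, handshake splitting). It shows only the kind of per-packet misbehavior decision such a tool makes.]

```python
import random

class Flakeway:
    """Toy decision engine for a flakeway: a forwarding element that
    intentionally misbehaves so developers can test how their code copes.
    Names and structure are invented for illustration only."""

    def __init__(self, drop_prob=0.05, dup_prob=0.05, seed=None):
        self.drop_prob = drop_prob   # probability a packet is silently dropped
        self.dup_prob = dup_prob     # probability a packet is duplicated
        self.rng = random.Random(seed)

    def forward(self, packet):
        """Return the packets actually emitted for one input packet:
        none (dropped), one (passed through), or two (duplicated)."""
        r = self.rng.random()
        if r < self.drop_prob:
            return []                  # silently discard
        if r < self.drop_prob + self.dup_prob:
            return [packet, packet]    # emit a duplicate
        return [packet]                # normal forwarding

# Push 1000 packets through a deliberately flaky forwarder.
fw = Flakeway(drop_prob=0.1, dup_prob=0.1, seed=42)
out = [p for pkt in range(1000) for p in fw.forward(pkt)]
```

A stateful variant would track flows (e.g. key decisions by TCP 4-tuple) so that it could, say, drop only the third packet of a given connection - the sort of targeted misbehavior that shakes out retransmission and reassembly bugs.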
--karl-- From karl at cavebear.com Sun Mar 10 17:56:27 2019 From: karl at cavebear.com (Karl Auerbach) Date: Sun, 10 Mar 2019 17:56:27 -0700 Subject: [ih] Internet or internet i In-Reply-To: References: Message-ID: <10c819bf-4a5b-e36c-e28a-7657c582ae80@cavebear.com> It is largely a matter of personal whim. (I tend to be somewhat of an iconoclast.) I tend to think of the issue of capitalization the same way that Gulliver viewed the dispute between Lilliput and Blefuscu over which end of an egg was the proper end to be cracked open. I tend to still use the singular "the" before "internet" but even from "the beginning" I worked with ARPAnet near-clones that were separated from the more visible networks and never really felt a need to elevate one or the other to a level that suggested that there were no others. (That notion of separation and non-uniqueness was accentuated by my contact with military networks that were nearly 100% the same technology as the then-nascent internet but were highly isolated.) I would note, however, that in my note on where the internet might be going (https://www.cavebear.com/cavebear-blog/internet_quo_vadis/) I posited another crank of the evolution of the net (which began pretty much as one network, then became a network of networks) to be a network of internets (or a network of networks of networks.) This fracturing of what once could deserve the capital 'I' of Internet has me greatly concerned - I am a fan of the end-to-end principle - but also reconciled to the fact that social and economic forces often overcome technical elegance. --karl-- On 3/10/19 5:30 PM, Joly MacFie wrote: > > On Sun, Mar 10, 2019 at 7:20 PM Karl Auerbach > wrote: > > the internet > > > Hi Karl > With due respect, and hoping not to cause a ruckus, may I ask how you > came to drop the capitalization of Internet, or did you never use it? 
> > joly > > > -- > --------------------------------------------------------------- > Joly MacFie 218 565 9365 Skype:punkcast > --------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.e.carpenter at gmail.com Sun Mar 10 18:10:38 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 11 Mar 2019 14:10:38 +1300 Subject: [ih] Hello to the list, plus some comments In-Reply-To: References: Message-ID: <53af6667-df2e-b482-f4b7-e786e2753a85@gmail.com> > You all know about my ICANN adventures, to avoid going into the weeds, > let's not talk about that at the moment. ;-) But let's! I think a non-revisionist history of the IAHC and how we got to ICANN, *written by a participant*, is very much needed. (I just re-read www.ntia.doc.gov/ntiahome/domainname/130dftmail/auerbach.pdf) Regards Brian Carpenter From brian.e.carpenter at gmail.com Sun Mar 10 19:06:59 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 11 Mar 2019 15:06:59 +1300 Subject: [ih] Hello to the list, plus some comments In-Reply-To: <60151e1e-85aa-c3ba-8058-d168967bf1ad@gmail.com> References: <53af6667-df2e-b482-f4b7-e786e2753a85@gmail.com> <60151e1e-85aa-c3ba-8058-d168967bf1ad@gmail.com> Message-ID: On 11-Mar-19 14:46, Dave Crocker wrote: > On 3/10/2019 6:10 PM, Brian E Carpenter wrote: >> But let's! I think a non-revisionist history of the IAHC and how we >> got to ICANN,*written by a participant*, is very much needed. > > > Karl was part of the IAHC? When? No, I didn't mean to imply that, but he was very much part of the public discussion. I think I can find the original IAHC announcement email somewhere. 
Brian From brian.e.carpenter at gmail.com Sun Mar 10 19:14:54 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 11 Mar 2019 15:14:54 +1300 Subject: [ih] Fwd: IAHC Members Announced In-Reply-To: <9611121450.aa03214@linus.isoc.org> References: <9611121450.aa03214@linus.isoc.org> Message-ID: <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> As mentioned: -------- Forwarded Message -------- Subject: IAHC Members Announced Date: Tue, 12 Nov 96 14:50:40 EST From: major at linus.isoc.org To: members.2 at linus.isoc.org Dear Members: Here is the press release announcing the names of the IAHC. You will notice, at the end of the release, that there is information to allow you to track and/or participate - which I do encourage. Thank you for your continued support; this is an important, and complicated effort - do keep tuned to the process and its results. Best regards, Don Contact: Internet Society 12020 Sunrise Valley Drive Reston, VA 20191-3429 TEL 703-648-9888 FAX 703-648-9887 E-mail info at isoc.org http://www.isoc.org NEW INTERNATIONAL COMMITTEE NAMED TO STUDY DOMAIN NAME SYSTEM ISSUES WASHINGTON, DC, November 12, 1996 -- An Internet International Ad Hoc Committee (IAHC) has been named to resolve issues resulting from current international debate over a proposal to establish additional global registries and international Top Level Domains (iTLDs). "We are pleased to have attracted such a high level of leading international experts in their fields to examine these questions that are critical to the current and future growth of the Internet," Donald M. Heath, president and CEO of the Internet Society, said in announcing the eleven-member committee. Heath will serve as chairman. Deliberations of the committee may lead to the establishment of new international Top Level Domains (iTLDs), adding to the current three-letter tags, such as .com, .net, and .org, that end many Internet email and World Wide Web addresses. Dr. Donald N. 
Telage, president of the Herndon, Virginia-based Network Solutions, Inc., which manages the InterNIC Registry administering the .com, .net, .edu, and .org top level domains, said: "Network Solutions has supported the registration process and the growth of the Internet since 1991. We have seen its evolution from a research and education tool to a powerful medium for global communication and collaboration. The National Science Foundation has played a critical role in the early governance activities, and we support the Internet Society's efforts to review issues critical to the future of Internet growth, evolution and governance. Network Solutions will participate in and support this effort enthusiastically, supplying our extensive operational knowledge as needed." Named to the new IAHC are: .Sally M. Abel specializes in international trademark and trade name counseling, chairs the Internet Subcommittee of the International Trademark Association (INTA), and will represent that organization on the IAHC. Ms. Abel is the partner in charge of the Trademark Group of the law firm of Fenwick and West, a Palo Alto, Ca. firm specializing in high technology matters. .Dave Crocker is co-founder of the Internet Mail Consortium, an industry trade association. He is also a principal with Brandenburg Consulting in Sunnyvale, Ca., a firm specializing in guiding the development and use of Internet applications. With ten years in the ARPA research community, ten years developing commercial network products and services, and extensive contributions to the Internet Engineering Task Force, he is considered an expert on the Internet, e-mail, electronic commerce, Internet operation and the Internet standards process. .Geoff Huston is the technical manager of Australia's Telstra Internet and is responsible for the architecture and operations of its service. 
He formerly was technical manager of the Australian Academic and Research Network, and was largely responsible for the introduction and subsequent development of the Internet in Australia. .David W. Maher, a partner at the law firm of Sonnenschein Nath & Rosenthal of Chicago, IL, is a registered patent attorney and has extensive experience in intellectual property and entertainment law. Principal outside trademark counsel for several nationwide companies, he has served as special counsel to the American Bar Association for telecommunications matters. .Perry E. Metzger is the president of New York-based Piermont Information Systems Inc., a consulting firm specializing in communications and computer systems security. He has worked with the New York financial community for many years and is active in the Internet Engineering Task Force's (IETF) security area, chairing the group's Simple Public Key Infrastructure working group. .Jun Murai is associate professor in the Faculty of Environmental Information at Keio University in Tokyo. He developed JUNET, Japan's first UUCP network, and the WIDE Internet, Japan's first IP network. He is president of the Japan Network Information Center (JPNIC) and serves as adjunct professor at the Institute of Advanced Studies of the United Nations University in Tokyo. .Hank Nussbacher is an independent networking consultant who currently works with IBM Israel as Internet Technology Manager and has been responsible for all aspects of establishing IBM Israel as a major ISP in Israel. He also consults to the Israeli inter-university consortium and is on the board of directors of the Internet Society of Israel. .Robert Shaw is an advisor on Global Information Infrastructure (GII) issues at the International Telecommunication Union (ITU). The ITU, based in Geneva, Switzerland, is a United Nations treaty organization within which governments and the private sector coordinate global telecom networks and services. 
.George Strawn is with the US National Science Foundation (NSF), which has funded Internet development for research and education. Mr. Strawn has been involved with the NSF's Internet activities for the last five years and also co-chairs the Federal Networking Council, a US government committee coordinating inter-agency Internet activities, including funding for administrative activities, such as the Internet Assigned Numbers Authority (IANA). .Albert Tramposch is senior legal counsellor at the World Intellectual Property Organization (WIPO) in Geneva. WIPO is a United Nations organization which has responsibility for the promotion of the protection of intellectual property throughout the world. It also administers various treaties dealing with legal and administrative aspects of intellectual property, including the international registration of trademarks. In addition, Stuart Levi, a partner in the New York office of Skadden, Arps, Slate, Meagher & Flom, and the head of the firm's Computer and Information Technology Practice, will serve as outside counsel supporting the IAHC. "The IAHC will be charged with fairly and openly looking at the complex issues surrounding the current domain name and registry situation, including trademark and infringement, economics and administration of registry operations, dispute policies, fees and iTLDs," Heath said. He anticipates the Committee reaching reasonable consensus on the issues surfaced sometime in January. A subset of the IAHC will seek to implement its recommendations very shortly after that. To meet its aggressive schedule, the widely dispersed group will primarily operate online, over the Internet. Interested parties throughout the Internet world will be able to participate in the IAHC's process through an electronic mail list service and a Web site that are being established. Discussions, evaluations and decisions will be available for public inspection. 
An archive, and relevant documents, will be available for public comment at the Web site which will be established by November 15 at http://www.iahc.org. To subscribe to the IAHC's email list service, send email with the word "subscribe" to: iahc-discuss-request at iahc.org. # # # # # # # From scott.brim at gmail.com Sun Mar 10 21:01:09 2019 From: scott.brim at gmail.com (Scott Brim) Date: Mon, 11 Mar 2019 00:01:09 -0400 Subject: [ih] Fwd: IAHC Members Announced In-Reply-To: <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> References: <9611121450.aa03214@linus.isoc.org> <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> Message-ID: They're buying iahc.org? On Sun, Mar 10, 2019 at 10:38 PM Brian E Carpenter < brian.e.carpenter at gmail.com> wrote: > As mentioned: > > -------- Forwarded Message -------- > Subject: IAHC Members Announced > Date: Tue, 12 Nov 96 14:50:40 EST > From: major at linus.isoc.org > To: members.2 at linus.isoc.org > > [...] > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karl at cavebear.com Sun Mar 10 22:59:06 2019 From: karl at cavebear.com (Karl Auerbach) Date: Sun, 10 Mar 2019 22:59:06 -0700 Subject: [ih] Hello to the list, plus some comments In-Reply-To: <60151e1e-85aa-c3ba-8058-d168967bf1ad@gmail.com> References: <53af6667-df2e-b482-f4b7-e786e2753a85@gmail.com> <60151e1e-85aa-c3ba-8058-d168967bf1ad@gmail.com> Message-ID: I was not part of the IAHC. I was asked, but at the time I couldn't carve out the time. 
--karl-- On 3/10/19 6:46 PM, Dave Crocker wrote: > On 3/10/2019 6:10 PM, Brian E Carpenter wrote: >> But let's! I think a non-revisionist history of the IAHC and how we >> got to ICANN, *written by a participant*, is very much needed. > > > Karl was part of the IAHC? When? > > d/ > From agmalis at gmail.com Mon Mar 11 05:42:21 2019 From: agmalis at gmail.com (Andrew G. Malis) Date: Mon, 11 Mar 2019 08:42:21 -0400 Subject: [ih] Fwd: IAHC Members Announced In-Reply-To: References: <9611121450.aa03214@linus.isoc.org> <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> Message-ID: Scott, Look at the date of the forwarded email. :-) Cheers, Andy On Mon, Mar 11, 2019 at 12:12 AM Scott Brim wrote: > They're buying iahc.org? > > On Sun, Mar 10, 2019 at 10:38 PM Brian E Carpenter < > brian.e.carpenter at gmail.com> wrote: > >> As mentioned: >> >> -------- Forwarded Message -------- >> Subject: IAHC Members Announced >> Date: Tue, 12 Nov 96 14:50:40 EST >> From: major at linus.isoc.org >> To: members.2 at linus.isoc.org >> >> [...] 
He is >> president >> of the Japan Network Information Center (JPNIC) and serves as adjunct >> professor >> at the Institute of Advanced Studies of the United Nations University in >> Tokyo. >> >> .Hank Nussbacher, is an independent networking consultant, >> currently works with IBM Israel as Internet Technology Manager and has >> been >> responsible for all aspects in establishing IBM Israel as a major ISP in >> Israel. >> He also consults to the Israeli inter-university consortium and is on the >> board >> of directors of the Internet Society of Israel. >> >> .Robert Shaw is an advisor on Global Information >> Infrastructure >> (GII) issues at the International Telecommunication Union (ITU). The >> ITU, >> based in Geneva, Switzerland, is a United Nations treaty organization >> within >> which governments and the private sector coordinate global telecom >> networks and >> services. >> >> .George Strawn is with the US National Science Foundation >> (NSF), >> which has funded Internet development for research and education. Mr. >> Strawn >> has been involved with the NSF's Internet activities for the last five >> years and >> also co-chairs the Federal Networking Council, a US government committee >> coordinating inter-agency Internet activities, including funding for >> administrative activities, such as the Internet Assigned Numbers >> Authority >> (IANA). >> >> .Albert Tramposch is senior legal counsellor at the World >> Intellectual Property Organization (WIPO) in Geneva. WIPO is a United >> Nations >> organization which has responsibility for the promotion of the protection >> of >> intellectual property throughout the world. It also administers various >> treaties dealing with legal and administrative aspects of intellectual >> property, >> including the international registration of trademarks. 
>> >> In addition, Stuart Levi, a partner in the New York Office >> of >> Skadden, Arps, Slate, Meagher & Flom, and the head of the firm's Computer >> and >> Information Technology Practice, will serve as outside counsel supporting >> the IAHC. >> >> "The IAHC will be charged with fairly and openly looking at >> the >> complex issues surrounding the current domain name and registry >> situation, >> including trademark and infringement, economics and administration of >> registry >> operations, dispute policies, fees and iTLDs," Heath said. He anticipates >> the >> Committee reaching reasonable consensus on issues surfaced, sometime in >> January. >> A subset of the IAHC will seek to implement its recommendations very >> shortly >> after that. >> >> To meet its aggressive schedule, the widely dispersed group >> will >> primarily operate online, over the Internet. Interested parties >> throughout the >> Internet world will be able to participate in the IAHC's process, through >> an >> electronic mail list service and a Web site that are being established. >> Discussions, evaluations and decisions will be available for public >> inspection. >> An archive, and relevant documents, will be available public comment at >> the >> Web site which will be established by November 15 at http://www.iahc.org. >> To subscribe to the IAHC's email list service, send email with the word >> "subscribe" to: iahc-discuss-request at iahc.org. >> >> # # # # # # # >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. >> > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jnc at mercury.lcs.mit.edu Mon Mar 11 06:36:36 2019 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 11 Mar 2019 09:36:36 -0400 (EDT) Subject: [ih] Internet or internet i Message-ID: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> > From: Karl Auerbach > It is largely a matter of personal whim. Not quite; the two words do (or were supposed to) have different meanings. > never really felt a need to elevate one or the other to a level that > suggested that there were no others. I recall (and contributed to, IIRC) the discussions which led to adoption of 'Internet'. The motivation for adopting a name for 'the' Internet was mostly practical, IIRC - we needed to be able to distinguish between the one large internet which many people were connected to, and the many smaller ones (which in those days were a lot more common, it wasn't an automatically done thing to connect one's LAN to the Internet - in fact, at that point, there weren't even products to enable one to do so - and IIRC it also pre-dated the commercial availability of LANs). The use of the capitalized form for the large internet struck us as appropriate since there was a lot of precedent for distinguishing a particular, significant member of a class with capital letters - 'White House' for instance. Had I known that down the line people (e.g. the AP) would see it as a matter of taste, I'd have argued for using a different word entirely. Also, I don't think that at the time, everyone bought into the vision that the collection of TCP/IP networks was going to grow into the ubiquitous Internet of today, which might have been a possible motivator for what some might see as a grandiose name. (Had people done so, the variable length addresses of IPv3 would surely not have been jettisoned.) Indeed, quite a few years later, large parts of what had by then become the IETF were still on board with the 'TCP->ISO conversion' (or whatever the jargon was, my memory of it has faded).
Noel From scott.brim at gmail.com Mon Mar 11 07:01:59 2019 From: scott.brim at gmail.com (Scott Brim) Date: Mon, 11 Mar 2019 10:01:59 -0400 Subject: [ih] Fwd: IAHC Members Announced In-Reply-To: References: <9611121450.aa03214@linus.isoc.org> <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> Message-ID: On Mon, Mar 11, 2019 at 8:42 AM Andrew G. Malis wrote: > Scott, > > Look at the date of the forwarded email. :-) > "Never mind." I wondered what was going on, echoes of the past etc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.brim at gmail.com Mon Mar 11 07:07:58 2019 From: scott.brim at gmail.com (Scott Brim) Date: Mon, 11 Mar 2019 10:07:58 -0400 Subject: [ih] Internet or internet i In-Reply-To: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> Message-ID: Noel: all true, but it's over. It's not ours to call anymore. (The one I really can't get used to is "emails".) -------------- next part -------------- An HTML attachment was scrubbed... URL: From agmalis at gmail.com Mon Mar 11 07:16:02 2019 From: agmalis at gmail.com (Andrew G. Malis) Date: Mon, 11 Mar 2019 10:16:02 -0400 Subject: [ih] Fwd: IAHC Members Announced In-Reply-To: References: <9611121450.aa03214@linus.isoc.org> <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> Message-ID: But now I'm really curious how many terabytes of email Brian has archived, and how he can search through it so quickly! Cheers, Andy On Mon, Mar 11, 2019 at 10:02 AM Scott Brim wrote: > On Mon, Mar 11, 2019 at 8:42 AM Andrew G. Malis wrote: > >> Scott, >> >> Look at the date of the forwarded email. :-) >> > > "Never mind." I wondered what was going on, echoes of the past etc. > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jeanjour at comcast.net Mon Mar 11 07:37:09 2019 From: jeanjour at comcast.net (John Day) Date: Mon, 11 Mar 2019 10:37:09 -0400 Subject: [ih] Internet or internet i In-Reply-To: References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> Message-ID: <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> But I think Noel's distinction still holds with the public internet being "Internet" and all others or the concept in general being "internet". The problem is the New York Times has decided that "Internet" is spelled "internet". > On Mar 11, 2019, at 10:07, Scott Brim wrote: > > Noel: all true, but it's over. It's not ours to call anymore. > > (The one I really can't get used to is "emails".) > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vint at google.com Mon Mar 11 10:27:46 2019 From: vint at google.com (Vint Cerf) Date: Mon, 11 Mar 2019 13:27:46 -0400 Subject: [ih] Internet or internet i In-Reply-To: <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> Message-ID: I am in agreement with John D on this - also Associated Press has chosen to cease making the distinction. i suppose only engineers who understand that the "private internet" need not be part of the public one and still use the same protocols (and address space!) will be offended (I admit, I am one of them). v On Mon, Mar 11, 2019 at 11:06 AM John Day wrote: > But I think Noel's distinction still holds with the public internet being > "Internet" and all others or the concept in general being "internet". > > The problem is the New York Times has decided that "Internet" is spelled > "internet".
> > > On Mar 11, 2019, at 10:07, Scott Brim wrote: > > Noel: all true, but it's over. It's not ours to call anymore. > > (The one I really can't get used to is "emails".) > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > -- New postal address: Google 1875 Explorer Street, 10th Floor Reston, VA 20190 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpress at csudh.edu Mon Mar 11 11:32:36 2019 From: lpress at csudh.edu (Larry Press) Date: Mon, 11 Mar 2019 18:32:36 +0000 Subject: [ih] Internet or internet i In-Reply-To: <10c819bf-4a5b-e36c-e28a-7657c582ae80@cavebear.com> References: , <10c819bf-4a5b-e36c-e28a-7657c582ae80@cavebear.com> Message-ID: <1552329157164.51796@csudh.edu> ETECSA, Cuba's government monopoly ISP, counts both people with access to the national intranet and to the global Internet as Internet users. (They just started offering Cubans 3G mobile access to the global Internet last December). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at 3kitty.org Mon Mar 11 11:50:51 2019 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 11 Mar 2019 11:50:51 -0700 Subject: [ih] Internet or internet i In-Reply-To: References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> Message-ID: <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Perhaps there are legal aspects involved? The word "internet" is a noun, but "Internet" is a proper noun. Is "Internet" trademarked, or copyrighted, or protected by such legal means -- somewhere in the world?
I suspect there are zillions of things like patent documents with "internet" and/or "Internet" in them, fodder for lawyers and courts to argue about for decades. You wouldn't believe how many hours I spent as an expert witness/consultant just arguing about the definitions of "program" and "reprogram", what exactly the difference was, and how the definitions changed over time and contexts. I wonder when we'll see facebooks as well as Facebook, and why not The Facebook? /Jack On 3/11/19 10:27 AM, Vint Cerf wrote: > I am in agreement with John D on this - also Associated Press has > chosen to cease making the distinction. > i suppose only engineers who understand that the "private internet" > need not be part of the public one > and still use the same protocols (and address space!) will be offended > (I admit, I am one of them). > > v > > > On Mon, Mar 11, 2019 at 11:06 AM John Day > wrote: > > But I think Noel's distinction still holds with the public > internet being "Internet" and all others or the concept in general > being "internet". > > The problem is the New York Times has decided that "Internet" is > spelled "internet". > > > >> On Mar 11, 2019, at 10:07, Scott Brim > > wrote: >> >> Noel: all true, but it's over. It's not ours to call anymore. >> >> (The one I really can't get used to is "emails".) >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for >> assistance. > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for > assistance.
> > > -- > New postal address: > Google > 1875 Explorer Street, 10th Floor > Reston, VA 20190 > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.e.carpenter at gmail.com Mon Mar 11 15:01:14 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 12 Mar 2019 11:01:14 +1300 Subject: [ih] Fwd: IAHC Members Announced In-Reply-To: References: <9611121450.aa03214@linus.isoc.org> <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> Message-ID: <977ac239-3f03-9c61-75ec-e97c6dc926b9@gmail.com> I don't really have that much, actually only 1.14GB, but I was IAB Chair at that time so I kept a fair amount of IAHC stuff. The IAB appointed a couple of the members and of course we were very concerned about the future of IANA. The search feature in Thunderbird is pretty good, but slow: I usually have to run it in background while doing something else. Regards Brian On 12-Mar-19 03:16, Andrew G. Malis wrote: > But now I'm really curious how many terabytes of email Brian has archived, and how he can search through it so quickly! > > Cheers, > Andy > > > On Mon, Mar 11, 2019 at 10:02 AM Scott Brim > wrote: > > On Mon, Mar 11, 2019 at 8:42 AM Andrew G. Malis > wrote: > > Scott, > > Look at the date of the forwarded email. :-) > > > "Never mind." I wondered what was going on, echoes of the past etc.
> From mfidelman at meetinghouse.net Mon Mar 11 15:00:40 2019 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 11 Mar 2019 18:00:40 -0400 Subject: [ih] Internet or internet i In-Reply-To: <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: <3467febf-212b-69cc-44de-7af7fa579dfa@meetinghouse.net> Wait one. I thought it was now "Intertubes." Miles On 3/11/19 2:50 PM, Jack Haverty wrote: > > Perhaps there are legal aspects involved? The word "internet" is a > noun, but "Internet" is a proper noun. Is "Internet" trademarked, or > copyrighted, or protected by such legal means -- somewhere in the world? > > I suspect there are zillions of things like patent documents with > "internet" and/or "Internet" in them, fodder for lawyers and courts to > argue about for decades. > > You wouldn't believe how many hours I spent as an expert > witness/consultant just arguing about the definitions of "program" and > "reprogram", what exactly the difference was, and how the definitions > changed over time and contexts. > > I wonder when we'll see facebooks as well as Facebook, and why not The > Facebook? > > /Jack > > > On 3/11/19 10:27 AM, Vint Cerf wrote: >> I am in agreement with John D on this - also Associated Press has >> chosen to cease making the distinction. >> i suppose only engineers who understand that the "private internet" >> need not be part of the public one >> and still use the same protocols (and address space!) will be >> offended (I admit, I am one of them). >> >> v >> >> >> On Mon, Mar 11, 2019 at 11:06 AM John Day > > wrote: >> >> But I think Noel's distinction still holds with the public >> internet being "Internet" and all others or the concept in >> general being "internet". >> >> The problem is the New York Times has decided that "Internet" is >> spelled "internet".
>> >> >> >>> On Mar 11, 2019, at 10:07, Scott Brim >> > wrote: >>> >>> Noel: all true, but it's over. It's not ours to call anymore. >>> >>> (The one I really can't get used to is "emails".) >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for >>> assistance. >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for >> assistance. >> >> >> >> -- >> New postal address: >> Google >> 1875 Explorer Street, 10th Floor >> Reston, VA 20190 >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contactlist-owner at postel.org for assistance. > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave at taht.net Mon Mar 11 18:24:31 2019 From: dave at taht.net (Dave Taht) Date: Mon, 11 Mar 2019 18:24:31 -0700 Subject: [ih] The history of the ECN debate? In-Reply-To: <977ac239-3f03-9c61-75ec-e97c6dc926b9@gmail.com> (Brian E. 
Carpenter's message of "Tue, 12 Mar 2019 11:01:14 +1300") References: <9611121450.aa03214@linus.isoc.org> <9b94312a-a388-1609-f4b1-d4a87923426c@gmail.com> <977ac239-3f03-9c61-75ec-e97c6dc926b9@gmail.com> Message-ID: <87pnqwlwa8.fsf_-_@taht.net> Based on some recent activity by the cable industry to repurpose the last ECN bit for a DCTCP-like architectural thing (see the various l4s and dualpi drafts over here: https://datatracker.ietf.org/group/tsvwg/documents/ and traffic on the tsvwg mailing list), my group (the bufferbloat.net ecn-sane project) ended up counter-proposing what we think is a cleaner architectural approach, at this ietf, rather than doing L4S and dualpi, here: https://tools.ietf.org/html/draft-morton-taht-tsvwg-sce-00 ... but forget all that. (there's debate on the bloat and tsvwg mailing lists going on, it's fast and furious, needn't take place here) I'd like to add a history section to the next related draft. All I know is that the ECN "CE MUST == DROP" vs "CE should be an earlier signal than drop" debate raged for years, with the ultimate recommendation in rfc3168 being CE MUST == DROP. I came (after 7 years of the ecn-enabled fq_codel deployment and lots of tests) to also think we needed an earlier signal than drop too, in an AQM, but hadn't come up with a way to retrofit the idea in a backwards compatible way to rfc3168 until recently. I know some of kk's story behind the ECN debates in 1990 or so, and portions of van's, but not much else, and the RED thing dragged on for years before it was standardized and it took til 2001 til rfc3168 was published with its current interpretation. I'd love to know more. Who was on each side? etc.
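[For readers less familiar with the mechanics being argued over, the two positions can be sketched in a few lines of Python. This is an illustrative sketch only: the threshold values and function names are invented for the example and do not come from RFC 3168 or from any of the drafts mentioned above. Under the RFC 3168 interpretation, a router applies a CE mark exactly where it would otherwise drop; under the "earlier signal" idea, CE marking of ECT-capable packets begins at a lower queue-delay threshold than dropping.]

```python
# Hypothetical thresholds, for illustration only (not from any RFC or draft).
DROP_THRESHOLD_MS = 20.0   # queue sojourn time at which the AQM would drop
MARK_THRESHOLD_MS = 5.0    # earlier threshold at which CE marking could begin

def rfc3168_style(sojourn_ms, ect):
    """RFC 3168 interpretation: CE is equivalent to drop.

    A CE mark is applied at exactly the point a drop would have occurred,
    so marking and dropping share one threshold."""
    if sojourn_ms > DROP_THRESHOLD_MS:
        return "mark" if ect else "drop"
    return "forward"

def earlier_signal_style(sojourn_ms, ect):
    """'Earlier signal' idea: CE is a gentler, earlier signal than drop.

    ECT-capable packets get marked at a lower threshold, while drops are
    reserved for more severe congestion."""
    if sojourn_ms > DROP_THRESHOLD_MS:
        return "drop"          # severe congestion: drop regardless of ECT
    if ect and sojourn_ms > MARK_THRESHOLD_MS:
        return "mark"          # mild congestion: mark ECT packets early
    return "forward"
```

With these toy numbers, a 10 ms sojourn time forwards the packet unchanged under the RFC 3168 reading but CE-marks an ECT packet under the earlier-signal reading, which is the behavioral difference the debate was about.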
From touch at strayalpha.com Mon Mar 11 18:52:29 2019 From: touch at strayalpha.com (Joe Touch) Date: Mon, 11 Mar 2019 18:52:29 -0700 Subject: [ih] Internet or internet i In-Reply-To: References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> Message-ID: <41801040-90F0-41F9-BEAC-244487BC1FC2@strayalpha.com> You mean the "associated press"? If they get to revise our preferred capitalization, we do too ;-) Joe > On Mar 11, 2019, at 10:27 AM, Vint Cerf wrote: > > I am in agreement with John D on this - also Associated Press has chosen to cease making the distinction. > i suppose only engineers who understand that the "private internet" need not be part of the public one > and still use the same protocols (and address space!) will be offended (I admit, I am one of them). > > v > > >> On Mon, Mar 11, 2019 at 11:06 AM John Day wrote: >> But I think Noel's distinction still holds with the public internet being "Internet" and all others or the concept in general being "internet". >> >> The problem is the New York Times has decided that "Internet" is spelled "internet". >> >> >> >>> On Mar 11, 2019, at 10:07, Scott Brim wrote: >>> >>> Noel: all true, but it's over. It's not ours to call anymore. >>> >>> (The one I really can't get used to is "emails".) >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. > > > -- > New postal address: > Google > 1875 Explorer Street, 10th Floor > Reston, VA 20190 > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From touch at strayalpha.com Mon Mar 11 18:55:16 2019 From: touch at strayalpha.com (Joe Touch) Date: Mon, 11 Mar 2019 18:55:16 -0700 Subject: [ih] Internet or internet i In-Reply-To: <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: FYI - this was an issue a long time ago, when a banking company claimed ownership of the word for their ATM (money, not cells) system. There was some deal with the ISOC, but I don't recall the details. I still use Internet to refer to the one that uses IANA-assigned addresses and ICANN-coordinated DNS. Everything else that uses IP protocols is - to me - an internet, and always will be. I do understand that the associated press don't understand the difference, but that IMO just makes them both ignorant and wrong, not right. Joe > On Mar 11, 2019, at 11:50 AM, Jack Haverty wrote: > > Perhaps there are legal aspects involved? The word "internet" is a noun, but "Internet" is a proper noun. Is "Internet" trademarked, or copyrighted, or protected by such legal means -- somewhere in the world? > > I suspect there are zillions of things like patent documents with "internet" and/or "Internet" in them, fodder for lawyers and courts to argue about for decades. > > You wouldn't believe how many hours I spent as an expert witness/consultant just arguing about the definitions of "program" and "reprogram", what exactly the difference was, and how the definitions changed over time and contexts. > > I wonder when we'll see facebooks as well as Facebook, and why not The Facebook? > > /Jack > > > >> On 3/11/19 10:27 AM, Vint Cerf wrote: >> I am in agreement with John D on this - also Associated Press has chosen to cease making the distinction.
>> i suppose only engineers who understand that the "private internet" need not be part of the public one >> and still use the same protocols (and address space!) will be offended (I admit, I am one of them). >> >> v >> >> >>> On Mon, Mar 11, 2019 at 11:06 AM John Day wrote: >>> But I think Noel's distinction still holds with the public internet being "Internet" and all others or the concept in general being "internet". >>> >>> The problem is the New York Times has decided that "Internet" is spelled "internet". >>> >>> >>> >>>> On Mar 11, 2019, at 10:07, Scott Brim wrote: >>>> >>>> Noel: all true, but it's over. It's not ours to call anymore. >>>> >>>> (The one I really can't get used to is "emails".) >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>> >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. >> >> >> -- >> New postal address: >> Google >> 1875 Explorer Street, 10th Floor >> Reston, VA 20190 >> >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reed at reedmedia.net Mon Mar 11 19:46:58 2019 From: reed at reedmedia.net (reed at reedmedia.net) Date: Mon, 11 Mar 2019 21:46:58 -0500 (CDT) Subject: [ih] Where was the first host name table?
Message-ID: RFC 15 (1969-09-25) says the telnet access to a serving host connects using an official site name, such as SRI, UCLA, UCSB, or UTAH. But where or how are these names mapped to their network addresses? What would map UCLA to 1 and SRI to 2 and UCSB to 3 and UTAH to 4 for example? Does this name mapping source code and data exist? RFC 11 about the Operating System of the UCLA HOST describes some tables for remote host number, connection number, and input link number. If I am reading correctly 3.4.1(a)(ii) alphabetical letters map to a bit for the link number. Any other early documentation on mapping names to numbers? Also would an early mapping need to know a link number and a host number? I see RFC 76 (1970-10) proposes a way to ask a host for socket number by name (such as TTY). But how to get to that host in the first place using a name? RFC 606 (1973-12) says each site maintains its own host list. Examples of names are in RFC 235 Site status and later updates. I don't see any standard format until proposed in RFC 606 and then RFC 608 (1974-01) which first introduces HOSTS.TXT. (Or where is HOSTS.TXT as a name documented prior to that?) I'd like to understand the use of names between systems prior to HOSTS.TXT. (By the way, are there any recorded or logged output of TELNET, FTP, and FTP MAIL sessions in that early 1970's environment, so I can better understand the real use?) Thanks, Jeremy C. Reed echo 'EhZ[h ^jjf0%%h[[Zc[Z_W$d[j%Xeeai%ZW[ced#]dk#f[d]k_d%' | \ tr '#-~' '\-.-{' From jeanjour at comcast.net Mon Mar 11 20:35:17 2019 From: jeanjour at comcast.net (John Day) Date: Mon, 11 Mar 2019 23:35:17 -0400 Subject: [ih] Where was the first host name table? In-Reply-To: References: Message-ID: <58BD6A83-07F7-4533-8E04-1658344845A9@comcast.net> Again we have another example of "the effect of T. S. Eliot on Shakespeare." It was a short table. Everyone's implementation did their own table. I think the official list was at the NIC.
That was before the network map would no longer fit on one 8.5 x 11 sheet of paper and you couldn't print who was up and who was down from a well-known port at the NMC. > On Mar 11, 2019, at 22:46, reed at reedmedia.net wrote: > > RFC 15 (1969-09-25) says the telnet access to a serving host connects > using an official site name, such as SRI, UCLA, UCSB, or UTAH. > > But where or how are these names mapped to its network address? > What would map UCLA to 1 and SRI to 2 and USCB to 3 and UTAH to 4 for > example? > > Does this name mapping source code and data exist? > > RFC 11 about the Operating System of the UCLA HOST describes some tables > for remote host number, connection number, and input link number. If I > am reading correctly 3.4.1(a)(ii) alphabetical letters map to a bit for > the link number. > > Any other early documentation on mapping names to numbers? > Also would an early mapping need to know a link number and a host > number? > > I see RFC 76 (1970-10) proposes a way to ask a host for socket number > by name (such as TTY). But how to get to that host in the first place > using a name? > > RFC 606 (1973-12) says each site maintains its own host list. An example > of names are in RFC 235 Site status and later updates. I don't see any > standard format until proposed in RFC 606 and then RFC 608 (1974-01) > which first introduces HOSTS.TXT. (Or where is HOSTS.TXT as a name > documented prior to that?) > > I'd like to understand the use of names between systems prior to > HOSTS.TXT. > > (By the way, are there any recorded or logged output of TELNET, FTP, and > FTP MAIL sessions in that early 1970's environment, so I can better > understand the real use?) > > Thanks, > > Jeremy C.
Reed > > echo 'EhZ[h ^jjf0%%h[[Zc[Z_W$d[j%Xeeai%ZW[ced#]dk#f[d]k_d%' | \ > tr '#-~' '\-.-{' > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. From cos at aaaaa.org Mon Mar 11 20:43:12 2019 From: cos at aaaaa.org (Ofer Inbar) Date: Mon, 11 Mar 2019 22:43:12 -0500 Subject: [ih] Internet or internet i In-Reply-To: <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: <20190312034312.GC1639@mip.aaaaa.org> On Mon, Mar 11, 2019 at 11:50:51AM -0700, Jack Haverty wrote: > I wonder when we'll see facebooks as well as Facebook, and why not The > Facebook? It'll be approximately -16 years for the latter, and further into the past for the former. Facebook started out as "thefacebook.com", and it got its name from the many face books college students were used to perusing at the beginning of the school year to see who all the incoming new students were (and possibly to find out which dorms people they recognized were in, etc.) Every college had facebooks printed on paper, and "thefacebook" was offering you the new online version of those. The last email I have saved from thefacebook.com is from Sep 2005, and then I have one from Oct 2005 from facebook.com, so I guess that is when they transitioned to the dns name without "the". 
-- Cos From brian.e.carpenter at gmail.com Mon Mar 11 20:54:32 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 12 Mar 2019 16:54:32 +1300 Subject: [ih] Internet or internet i In-Reply-To: References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: <5d2c0532-4ba3-5e5b-ff11-5614bf2d5cd2@gmail.com> The chicago manual of style has adopted the same heresy. Think of them the next time you fly via the o'hare airport, which is in the illinois. If vladimir gets his way, I suppose people will realise that there's a difference between the russian internet and the other one. But maybe we should blame it all on stev or kre for starting with the lower case? I do have a message from stev sent in 1993 that ends: > stev knowles > one of the Internet Area Directors > stev at ftp.com so he did have a shift key ;-) Regards Brian On 12-Mar-19 14:55, Joe Touch wrote: > FYI - this was an issue a long time ago, when a banking company claimed ownership of the word for their ATM (money, not cells) system. There was some deal with the ISOC, but I don?t recall the details. > > I still use Internet to refer to the one that uses IANA-assigned addresses and ICANN-coordinated DNS. Everything else that uses IP protocols is - to me - an internet, and always will be. > > I do understand that the associated press don?t understand the difference, but that IMO just makes them both ignorant and wrong, not right. > > Joe > > On Mar 11, 2019, at 11:50 AM, Jack Haverty > wrote: > >> Perhaps there are legal aspects involved?? The word "internet" is a noun, but "Internet" is a proper noun.?? Is "Internet" trademarked, or copyrighted, or protected by such legal means -- somewhere in the world? >> >> I suspect there are zillions of things like patent documents with "internet" and/or "Internet" in them, fodder for lawyers and courts to argue about for decades. 
>> >> You wouldn't believe how many hours I spent as an expert witness/consultant just arguing about the definitions of "program" and "reprogram", what exactly the difference was, and how the definitions changed over time and contexts. >> >> I wonder when we'll see facebooks as well as Facebook, and why not The Facebook? >> >> /Jack >> >> >> On 3/11/19 10:27 AM, Vint Cerf wrote: >>> I am in agreement with John D on this - also Associated Press has chosen to cease making the distinction. >>> i suppose only engineers who understand that the "private internet" need not be part of the public one >>> and still use the same protocols (and address space!) will be offended (I admit, I am one of them). >>> >>> v >>> >>> >>> On Mon, Mar 11, 2019 at 11:06 AM John Day > wrote: >>> >>> But I think Noel?s distinction still holds with the public internet being ?Internet? and all others or the concept in general being ?internet?. >>> >>> The problem is the New York Times has decided that ?Internet? is spelled ?internet?. >>> >>> >>> >>>> On Mar 11, 2019, at 10:07, Scott Brim > wrote: >>>> >>>> Noel: all true, but it's over. It's not ours to call anymore.? >>>> >>>> (The one I really can't get used to is "emails".) >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>> >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. >>> >>> >>> >>> -- >>> New postal address: >>> Google >>> 1875 Explorer Street, 10th Floor >>> Reston, VA 20190 >>> >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. 
From karl at cavebear.com Mon Mar 11 22:41:11 2019 From: karl at cavebear.com (Karl Auerbach) Date: Mon, 11 Mar 2019 22:41:11 -0700 Subject: [ih] SDC IMP? Message-ID: Some very early maps of the ARPAnet show an IMP at SDC (System Development Corporation) in Santa Monica. However, during my time there (roughly 1971 through 1980) I never saw hide nor hair of one (and given that our group under Clark Weissman and Gerry Cole was the group involved with these sorts of things, we would have been the group that would most likely have known about it.) Does anyone know more about the SDC IMP? Did it actually exist? If so, any details? --karl-- From steve at shinkuro.com Mon Mar 11 22:57:00 2019 From: steve at shinkuro.com (Steve Crocker) Date: Tue, 12 Mar 2019 14:57:00 +0900 Subject: [ih] SDC IMP? In-Reply-To: References: Message-ID: <4DEEC3E9-B6CA-4109-B026-3FE39E214264@shinkuro.com> I'm not sure I ever laid eyes on it but I'm pretty sure it was installed and eventually connected to a host. In the great Telnet bake-off in October 1971, each site tried to connect to each other site. A big table was filled in at the meeting at MIT. SDC was distinguished by being the only site whose row and column were empty, i.e., no one could connect to them and they couldn't connect to anyone. I believe they came alive shortly after that. Steve Sent from my iPhone > On Mar 12, 2019, at 2:41 PM, Karl Auerbach wrote: > > Some very early maps of the ARPAnet show an IMP at SDC (System > Development Corporation) in Santa Monica.
> > However, during my time there (roughly 1971 through 1980) I never saw > hide nor hair of one (and given that our group under Clark Weissman and > Gerry Cole was the group involved with these sorts of things, we would > have been the group that would most likely have known about it.) > > Does anyone know more about the SDC IMP? Did it actually exist? If so, > any details? > > --karl-- From steve at shinkuro.com Mon Mar 11 23:28:22 2019 From: steve at shinkuro.com (Steve Crocker) Date: Tue, 12 Mar 2019 15:28:22 +0900 Subject: [ih] SDC IMP? In-Reply-To: References: Message-ID: I can add that SDC was one of the five major speech understanding research projects that IPTO sponsored during the 1971-76 timeframe. I don't have a specific visual recall of their network interactions, but they could not have been part of the project without a working network connection. (I was the program manager for this work from 1971 to 1974.) Steve Sent from my iPhone > On Mar 12, 2019, at 2:41 PM, Karl Auerbach wrote: > > Some very early maps of the ARPAnet show an IMP at SDC (System > Development Corporation) in Santa Monica. > > However, during my time there (roughly 1971 through 1980) I never saw > hide nor hair of one (and given that our group under Clark Weissman and > Gerry Cole was the group involved with these sorts of things, we would > have been the group that would most likely have known about it.) > > Does anyone know more about the SDC IMP? Did it actually exist? If so, > any details? > > --karl--
From aamsendonly396 at gmail.com Tue Mar 12 07:51:56 2019 From: aamsendonly396 at gmail.com (Alex McKenzie) Date: Tue, 12 Mar 2019 10:51:56 -0400 Subject: [ih] SDC IMP? In-Reply-To: References: Message-ID: J. Kreznar of SDC was the author of RFC 17, dated 27 August 1969. I imagine that whatever SDC group J. Kreznar was in had the ARPA contract that led to SDC getting an IMP. The SDC IMP was #8, probably installed around April or May of 1970. The first Host connected to it was an IBM 360/67. The technical liaison for that Host in late 1971 was Bob Long. All of the above comes from the RFC archives. Cheers, Alex McKenzie On Tue, Mar 12, 2019 at 2:00 AM Karl Auerbach wrote: > Some very early maps of the ARPAnet show an IMP at SDC (System > Development Corporation) in Santa Monica. > > However, during my time there (roughly 1971 through 1980) I never saw > hide nor hair of one (and given that our group under Clark Weissman and > Gerry Cole was the group involved with these sorts of things, we would > have been the group that would most likely have known about it.) > > Does anyone know more about the SDC IMP? Did it actually exist? If so, > any details? > > --karl-- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karl at cavebear.com Tue Mar 12 08:40:07 2019 From: karl at cavebear.com (Karl Auerbach) Date: Tue, 12 Mar 2019 08:40:07 -0700 Subject: [ih] SDC IMP? In-Reply-To: References: Message-ID: Hmmm, I managed some of the computers (PDP 11/40 then 45 then 70) used by the "continuous speech recognition" project. The name I remember is Iris Kameny (sp). Billy Breckenridge also worked on it. I also remember the name Mort Bernstein.
(We had a really nice soundproof room and graphical displays - it was used in a movie about searching for bigfoot. ;-) Those machines weren't hooked to an IMP. I believe that we did have a "Santa Barbara box" - which, if I remember correctly, was an IBM channel-to-IMP interface - from Roland Bryan's crew at ACC in Santa Barbara. That was "owned" by Lee Mo (or Moho?). But it seemed not to be in use during my time, although there was a lot of active work being done on one of the IBM 360 machines (probably the /67) that used a lot of LISP stuff. I can't remember the name of the person who ran that project, but I do remember he smoked really foul cigars and had a sofa in his office that was utterly saturated with cigar smoke. The networking and operating system R&D group at SDC wasn't all that large and we were all concentrated into one floor of the main building (until we expanded into a secured space in the Q7 building (named after the Q7 SAGE computer) for the Blacker secure OS and networking project later in the 1970s.) I found (and bought) a book about SDC, but it barely touches on networking at all. --karl-- On 3/11/19 11:28 PM, Steve Crocker wrote: > I can add that SDC was one of the five major speech understanding research projects that IPTO sponsored during the 1971-76 timeframe. I don't have a specific visual recall of their network interactions, but they could not have been part of the project without a working network connection. (I was the program manager for this work from 1971 to 1974.) > > Steve > > Sent from my iPhone > >> On Mar 12, 2019, at 2:41 PM, Karl Auerbach wrote: >> >> Some very early maps of the ARPAnet show an IMP at SDC (System >> Development Corporation) in Santa Monica.
>> >> However, during my time there (roughly 1971 through 1980) I never saw >> hide nor hair of one (and given that our group under Clark Weissman and >> Gerry Cole was the group involved with these sorts of things, we would >> have been the group that would most likely have known about it.) >> >> Does anyone know more about the SDC IMP? Did it actually exist? If so, >> any details? >> >> --karl-- >> >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. From bill.n1vux at gmail.com Tue Mar 12 13:41:36 2019 From: bill.n1vux at gmail.com (Bill Ricker) Date: Tue, 12 Mar 2019 16:41:36 -0400 Subject: [ih] Where was the first host name table? In-Reply-To: <58BD6A83-07F7-4533-8E04-1658344845A9@comcast.net> References: <58BD6A83-07F7-4533-8E04-1658344845A9@comcast.net> Message-ID: On Mon, Mar 11, 2019 at 11:59 PM John Day wrote: > > Again we have another example of ?the effect of T. S. Eliot on Shakespeare.? Shakespeare just strung a bunch of book-titles together to make dialog. Bill Ricker bill.n1vux at gmail.com https://www.linkedin.com/in/n1vux From johnl at iecc.com Wed Mar 13 01:57:44 2019 From: johnl at iecc.com (John Levine) Date: 13 Mar 2019 17:57:44 +0900 Subject: [ih] Internet or internet i In-Reply-To: <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: <20190313085745.E324F200FE1286@ary.local> In article <1b06b982-8d17-3aad-8a6a-027e30772d0b at 3kitty.org> you write: >Perhaps there are legal aspects involved?? The word "internet" is a >noun, but "Internet" is a proper noun.?? Is "Internet" trademarked, or >copyrighted, or protected by such legal means -- somewhere in the world? If it is, it isn't anywhere that matters. There are plenty of trademarks that include the word "Internet" with other words or symbols, but not of the word alone. Copyright of a single word makes no sense. 
I would prefer to keep the distinction between global Internet and private internet, but that battle is lost and not worth refighting. R's, John From joly at punkcast.com Wed Mar 13 03:48:56 2019 From: joly at punkcast.com (Joly MacFie) Date: Wed, 13 Mar 2019 06:48:56 -0400 Subject: [ih] WEBCAST TODAY: World Wide Web 30th Anniversary Celebrations Message-ID: I've never seen an HTML-formatted post on this list so here's hoping! This is a restream of the two events in Geneva and London yesterday, with some fixing of sync and levels. London featured a surprise appearance by the city's Mayor. And even royalty have shown interest in Sir Tim's NeXT. [image: livestream] In 1989 the world's largest physics laboratory, *CERN*, was a hive of ideas and information stored on multiple incompatible computers. *Sir Tim Berners-Lee* envisioned a unifying structure for linking information across different computers, and wrote a proposal in March 1989 called "*Information Management: A Proposal*". By 1991 this vision of universal connectivity had become the World Wide Web. To celebrate 30 years since Sir Tim Berners-Lee's proposal and to kick-start a series of celebrations worldwide, CERN hosted a *30th Anniversary event* in the morning of 12 March 2019 in partnership with the *World Wide Web Consortium* (W3C) and with the *World Wide Web Foundation*. Later in the day the *Science Museum* in London, the home of *the original NeXT Computer* used by Sir Tim to design the World Wide Web, ran a *second event*, also in partnership with the World Wide Web Foundation. Sir Tim spoke at both events, and both will be restreamed in full today, *Wednesday 13 March*, on the *Internet Society Livestream Channel*, starting at *09:00 EDT* (13:00 UTC).
*VIEW ON LIVESTREAM*: https://livestream.com/internetsociety/web30 (No captions) CERN GENEVA *Welcome and Introduction* - Welcome by *Anna Cook* - master of ceremonies - Opening talk by *Fabiola Gianotti* - CERN Director General *Let's Share What We Know* - panel discussion - Chair: *Frédéric Donck* - Speakers: *Tim Berners-Lee*, *Robert Cailliau*, *Jean-François Groff*, *Lou Montulli*, *Zeynep Tufekci* *For Everyone* - conversation - Sir *Tim Berners-Lee* and *Bruno Giussani* *Towards the Future* - panel discussion - Chair: *Bruno Giussani* - Speakers: *Doreen Bogdan-Martin*, *Jovan Kurbalija*, *Monique Morrow*, *Zeynep Tufekci* *Closing Remarks* - *Charlotte Warakaulle* - CERN Director for International Relations *PHOTOS*: https://cds.cern.ch/record/2665683 SCIENCE MUSEUM LONDON - *Imogen Heap* – Grammy Award-winning singer, songwriter and producer. - *Matt Brittin* – President, EMEA Business & Operations at Google - *Roya Mahboob* – The NewNow Leader, Tech Entrepreneur & Women's Rights Activist - *Taylor Wilson* – The NewNow Leader, Nuclear Physicist, Science Advocate & Inventor - *Dr.
URL: From vint at google.com Wed Mar 13 05:31:48 2019 From: vint at google.com (Vint Cerf) Date: Wed, 13 Mar 2019 08:31:48 -0400 Subject: [ih] Internet or internet i In-Reply-To: References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: CNRI fought a 10-year battle over the use of the term "Internet" - the MOST ATM system got a trademark on "Internet" somewhere around 1988/1990 (guessing) and I discovered that while I was VP at CNRI. Bob Kahn's wife, Patrice, is a copyright attorney and fought over this misappropriation that was aided by a clueless trademark office. After 10 years and $100K underwritten by CNRI, the Trademark Tribunal agreed that the term "Internet" had to refer to the global network we have all built. The bank group that trademarked the term did not use it to refer to their ATM system (which they called MOST) - I think they only used it to refer to their newsletter or something. Copying Bob K and Patrice L to repair any damage I may have done to facts. vint On Mon, Mar 11, 2019 at 10:23 PM Joe Touch wrote: > FYI - this was an issue a long time ago, when a banking company claimed > ownership of the word for their ATM (money, not cells) system. There was > some deal with the ISOC, but I don't recall the details. > > I still use Internet to refer to the one that uses IANA-assigned addresses > and ICANN-coordinated DNS. Everything else that uses IP protocols is - to > me - an internet, and always will be. > > I do understand that the associated press don't understand the difference, > but that IMO just makes them both ignorant and wrong, not right. > > Joe > > On Mar 11, 2019, at 11:50 AM, Jack Haverty wrote: > >> Perhaps there are legal aspects involved? The word "internet" is a noun, > but "Internet" is a proper noun. Is "Internet" trademarked, or > copyrighted, or protected by such legal means -- somewhere in the world?
> > I suspect there are zillions of things like patent documents with > "internet" and/or "Internet" in them, fodder for lawyers and courts to > argue about for decades. > > You wouldn't believe how many hours I spent as an expert > witness/consultant just arguing about the definitions of "program" and > "reprogram", what exactly the difference was, and how the definitions > changed over time and contexts. > > I wonder when we'll see facebooks as well as Facebook, and why not The > Facebook? > > /Jack > > > On 3/11/19 10:27 AM, Vint Cerf wrote: > > I am in agreement with John D on this - also Associated Press has chosen > to cease making the distinction. > i suppose only engineers who understand that the "private internet" need > not be part of the public one > and still use the same protocols (and address space!) will be offended (I > admit, I am one of them). > > v > > > On Mon, Mar 11, 2019 at 11:06 AM John Day wrote: > >> But I think Noel?s distinction still holds with the public internet being >> ?Internet? and all others or the concept in general being ?internet?. >> >> The problem is the New York Times has decided that ?Internet? is spelled >> ?internet?. >> >> >> >> On Mar 11, 2019, at 10:07, Scott Brim wrote: >> >> Noel: all true, but it's over. It's not ours to call anymore. >> >> (The one I really can't get used to is "emails".) >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. >> >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. 
-- New postal address: Google 1875 Explorer Street, 10th Floor Reston, VA 20190 -------------- next part -------------- An HTML attachment was scrubbed... URL: From alejandroacostaalamo at gmail.com Wed Mar 13 06:37:57 2019 From: alejandroacostaalamo at gmail.com (Alejandro Acosta) Date: Wed, 13 Mar 2019 22:37:57 +0900 Subject: [ih] WEBCAST TODAY: World Wide Web 30th Anniversary Celebrations In-Reply-To: References: Message-ID: <61d2b4be-4fc0-13aa-2c40-435931ffb86d@gmail.com> it would have been sad if an _html_ email regarding _www_ had not passed :-( On 13/3/19 at 19:48, Joly MacFie wrote: > I've never seen an html formatted post on this list so here's hoping! > This is a restream of the two events in Geneva and London yesterday, > with some fixing of sync and levels. London featured a surprise > appearance by the city's Mayor. And even royalty have shown interest > in Sir > Tim's NeXT. > > > livestream In 1989 the > world's largest physics laboratory, *CERN*, was a > hive of ideas and information stored on multiple incompatible > computers. *Sir Tim Berners-Lee* envisioned a unifying structure for > linking information across different computers, and wrote a proposal > in March 1989 called "*Information Management: A Proposal*".
> > By 1991 this vision of universal connectivity had become the World > Wide Web. > > To celebrate 30 years since Sir Tim Berners-Lee's proposal and to > kick-start a series of celebrations worldwide, CERN hosted a?*30th > Anniversary event *?in the morning of 12 > March 2019 in partnership with the?*World Wide Web Consortium > *?(W3C) and with the?*World Wide Web Foundation > *. Later in the day the?*Science Museum > *?in London, the home of?*the > original NeXT Computer ?*used by Sir Tim to > design the World Wide Web, ran a?*second event > *, > also in partnership with the World Wide Web Foundation. > > Sir Tim spoke at both events, and both will be restreamed in full > today*?Wednesday 13 March*?on the?*Internet Society Livestream Channel > *, starting at?*09:00 > EDT*?(13:00 UTC). > > *VIEW ON LIVESTREAM*:?https://livestream.com/internetsociety/web30(No > captions) > > > CERN GENEVA > > *Welcome and Introduction* > > * Welcome by?*Anna Cook*?- master of ceremonies > * Opening talk by?*Fabiola Gianotti*?- CERN Director General > > *Let?s Share What We Know*?- panel discussion > > * Chair:?*Fr?d?ric Donck* > * Speakers:?*Tim Berners-Lee*,?*Robert Cailliau*,?*Jean-Fran?ois > Groff*,?*Lou Montulli*,?*Zeynep Tufekci* > > *For Everyone?*- conversation > > * Sir?*Tim Berners-Lee*?and?*Bruno Giussani* > > *Towards the Future*?- panel discussion > > * Chair:*?Bruno Giussani* > * Speakers:?*Doreen Bogdan-Martin*,?*Jovan Kurbalija*,?*Monique > Morrow*,?*Zeynep Tufekci* > > *Closing Remarks* > > * *Charlotte Warakaulle*?- CERN Director for International Relations > > *PHOTOS*:?https://cds.cern.ch/record/2665683 > > > SCIENCE MUSEUM LONDON > > * *Imogen Heap*?? Grammy Award-winning singer, songwriter and producer. > * *Matt Brittin*?? President, EMEA Business & Operations at Google > * *Roya Mahboob*?? The NewNow Leader, Tech Entrepreneur & Women?s > Rights Activist > * *Taylor Wilson*?? The NewNow Leader, Nuclear Physicist, Science > Advocate & Inventor > * *Dr. 
Anne-Marie Imafidon* MBE – Technology thought leader and > founder and CEO of STEMettes > * *Sadiq Khan*, Mayor of London > * *Sir Tim Berners-Lee* in conversation with BBC journalist *Samira > Ahmed* > > *TWITTER*: #web30 https://bit.ly/web30tweets > > *Permalink* > https://isoc.live/10969/ > > -- > --------------------------------------------------------------- > Joly MacFie 218 565 9365 Skype:punkcast > -------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From joly at punkcast.com Wed Mar 13 11:47:48 2019 From: joly at punkcast.com (Joly MacFie) Date: Wed, 13 Mar 2019 14:47:48 -0400 Subject: [ih] Internet or internet i In-Reply-To: References: <20190311133636.3CC5F18C089@mercury.lcs.mit.edu> <9B6C74D4-155C-4C7D-88A1-CCBD564DED24@comcast.net> <1b06b982-8d17-3aad-8a6a-027e30772d0b@3kitty.org> Message-ID: Googling turns up this from the Wapo - IT'S AN INTER-KNOT from March 1995 https://www.washingtonpost.com/archive/business/1995/03/20/its-an-inter-knot/31a62642-33af-4e98-9b16-28f9f02b2932/ *"The Internet, as a technology and as a global network, was well established even in 1984," argued Tony Rutkowski, executive director of the Internet Society.
"It's always been a generic term." * and this from 1996 https://ipmall.law.unh.edu/content/ttab-trademark-trial-and-appeal-board-1-internet-inc-v-internet-society-and-corporation which appears (IANAL) to be a preliminary judgement dismissing a countersuit. *In this case, petitioners have failed to state a claim showing that they are entitled to relief under the "false suggestion of a connection" provision of Section 2(a) because they have not alleged, nor can it reasonably be inferred from their pleading, that INTERNET points "uniquely and unmistakably" to petitioners' own identity or persona. Indeed, by the affirmative allegations in their pleading, petitioners have admitted that INTERNET is not their name or identity, but rather the name of the "network of networks" itself.* It seems that the case is citeable http://bit.ly/2Cfcxtm Did it really take 10 years? ' j On Wed, Mar 13, 2019 at 8:31 AM Vint Cerf wrote: > CNRI fought a 10 year battle over the use of the term "Internet" - the > MOST ATM system got a trademark on "Internet" somewhere around 1988/1990 > (guessing) and I discovered that while I was VP at CNRI. Bob Kahn's wife, > Patrice, is a copyright attorney and fought over this misappropriation that > was aided by a clueless trademark office. After 10 years and $100K > underwritten by CNRI, the Trademark Tribunal agreed that the term > "Internet" had to refer to the global network we have all built. The bank > group that trademarked the term did not use it to refer to their ATM system > (which they called MOST) -I think they only used in to refer to their > newsletter or something. Copying Bob K and Patrice L to repair any damage I > may have done to facts. > > vint > > > On Mon, Mar 11, 2019 at 10:23 PM Joe Touch wrote: > >> FYI - this was an issue a long time ago, when a banking company claimed >> ownership of the word for their ATM (money, not cells) system. There was >> some deal with the ISOC, but I don?t recall the details. 
>> >> I still use Internet to refer to the one that uses IANA-assigned >> addresses and ICANN-coordinated DNS. Everything else that uses IP protocols >> is - to me - an internet, and always will be. >> >> I do understand that the associated press don?t understand the >> difference, but that IMO just makes them both ignorant and wrong, not right. >> >> Joe >> >> On Mar 11, 2019, at 11:50 AM, Jack Haverty wrote: >> >> Perhaps there are legal aspects involved? The word "internet" is a noun, >> but "Internet" is a proper noun. Is "Internet" trademarked, or >> copyrighted, or protected by such legal means -- somewhere in the world? >> >> I suspect there are zillions of things like patent documents with >> "internet" and/or "Internet" in them, fodder for lawyers and courts to >> argue about for decades. >> >> You wouldn't believe how many hours I spent as an expert >> witness/consultant just arguing about the definitions of "program" and >> "reprogram", what exactly the difference was, and how the definitions >> changed over time and contexts. >> >> I wonder when we'll see facebooks as well as Facebook, and why not The >> Facebook? >> >> /Jack >> >> >> On 3/11/19 10:27 AM, Vint Cerf wrote: >> >> I am in agreement with John D on this - also Associated Press has chosen >> to cease making the distinction. >> i suppose only engineers who understand that the "private internet" need >> not be part of the public one >> and still use the same protocols (and address space!) will be offended (I >> admit, I am one of them). >> >> v >> >> >> On Mon, Mar 11, 2019 at 11:06 AM John Day wrote: >> >>> But I think Noel?s distinction still holds with the public internet >>> being ?Internet? and all others or the concept in general being ?internet?. >>> >>> The problem is the New York Times has decided that ?Internet? is spelled >>> ?internet?. >>> >>> >>> >>> On Mar 11, 2019, at 10:07, Scott Brim wrote: >>> >>> Noel: all true, but it's over. It's not ours to call anymore. 
>>> >>> (The one I really can't get used to is "emails".) -- --------------------------------------------------------------- Joly MacFie 218 565 9365 Skype:punkcast -------------------------------------------------------------- - -------------- next part -------------- An HTML attachment was scrubbed... URL: From joly at punkcast.com Mon Mar 18 04:46:11 2019 From: joly at punkcast.com (Joly MacFie) Date: Mon, 18 Mar 2019 07:46:11 -0400 Subject: [ih] WEBCAST TODAY: Scott Bradner – A History of the Internet Message-ID: [Apologies for boilerplate description!
I will upload this to YouTube and Internet Archive when done - volunteer
transcribers invited!]

No one is more aware of the variety of strands that comprise Internet
history than those who participated in its early development. Thus the
indefinite article in this talk's title, as in our previous webcast of the
same name by Dave Farber. Scott gives a concise chronology and then, in the
Q&A, expands on the current landscape.

Today, Monday March 18 2019, at 10am ET (14:00 UTC) the Internet Society
Livestream Channel will webcast an edited version of Scott Bradner's recent
presentation 'A History of the Internet' at the Berkman Klein Center in
Boston.

A veteran of ARPANET and the IETF, Scott Bradner's many roles include
Internet Society Trustee from 1993 to 1996 and secretary of the Board of
Trustees from 2003 to 2016.

This talk provides a history of the reasons for and the technology of the
Internet. It also presents some of the reasons that the Internet has had
such an impact and some of the challenges that may cause the Internet of
tomorrow to be significantly less revolutionary than the Internet to date.
The presentation is followed by a vigorous Q&A.

View on Livestream: https://livestream.com/internetsociety/scottbradner
(No captions)

Twitter: Scott Bradner https://bit.ly/scottbradner

Permalink: https://isoc.live/10981/

--
---------------------------------------------------------------
Joly MacFie 218 565 9365 Skype:punkcast
---------------------------------------------------------------

From gregskinner0 at icloud.com Tue Mar 19 16:17:59 2019
From: gregskinner0 at icloud.com (Greg Skinner)
Date: Tue, 19 Mar 2019 16:17:59 -0700
Subject: [ih] Reconstitution Protocol (was When the words Internet was design to survive a nuclear war appeared for the first time?
thanks)
In-Reply-To:
References: <1419881554.1321723.1550542202878@mail.yahoo.com>
Message-ID: <18851A9E-282B-47AC-9C42-601BCBAD6069@icloud.com>

I found some more information about the Reconstitution Protocol (RP)
project. SRI published a final report in June 1987 describing RP
architecture, implementation, and experiments. The report cites Radia
Perlman's improved network partitioning paper, which was also a topic for
discussion at an Internet Meeting held at MIT in May 1980. Jim Mathis
presented a set of slides called 'Automated Reconstitution Using Airborne
Packet Radios' at IETF 1. One of the slides is of an experiment that took
place in 1981, involving the reconstitution of a packet radio network.

I remember there was a demo of RP, perhaps early in 1986, for several IETF
members, including Dave Clark. (I remember him specifically because after
the demo ended, he wanted to have dinner, so he was inquiring "foodp?" of
other attendees.)

--gregbo

> On Feb 19, 2019, at 4:38 AM, Vint Cerf wrote:
>
> Barbara is right about the SRI role in the SAC tests - I may be
> misremembering the reconstitution protocol solutions and would be happy to
> get better information from Jim or Zaw-Sing if they are still around. I
> think the tests I remember were done in 1982. Charlie Brown was involved as
> an Air Force officer at the time.
>
> Vint
>
>
> On Mon, Feb 18, 2019 at 11:18 PM Jack Haverty wrote:
>
>> I vaguely remember being at a meeting sometime in the mid-80s. Some
>> government/military/contractor site, but can't remember where. It was a
>> large (15 or 20) group of people, none of whom I knew. They were using
>> lots of jargon I didn't recognize too. I had come in a bit late.
>>
>> One of the terms that cropped up was "New Dets Per Second". I knew what
>> bits/second were, and kilobits/sec., and similar networky things, but had
>> never heard "New Dets Per Second".
>>
>> After a while, the meaning became clear from context....
>> It was actually
>> "NuDets/Second", shorthand for "Nuclear Detonations Per Second".
>>
>> I then finally realized I was in the wrong meeting.
>>
>> So someone was thinking about such things...
>>
>> /Jack
>>
>> On 2/18/19 6:10 PM, Barbara Denny wrote:
>>
>> I don't remember Radia Perlman's ideas for supporting network partitioning
>> and coalescing.
>>
>> SRI did have a project where we did a few experiments at SAC demonstrating
>> a solution to this problem using the ARPAnet and Packet Radio networks. We
>> did go out to Offutt for demonstrations using their aircraft. This was in
>> the mid 80's. I also think I may have given a demonstration of the
>> protocols during IETF 4 at SRI to a few people.
>>
>> I believe Zaw-Sing Su and Jim Mathis worked on the design of the
>> Reconstitution Protocols. I took part in the development and demonstration.
>> Mark Lewis also participated in the project as a developer and maybe more,
>> since I was not part of the project initially. I am pretty sure there was
>> a paper at MILCOM about this work.
>>
>> I believe there was also an RFP in this same time frame asking for
>> solutions to islands of connectivity that may happen as a result of
>> military conflict. I worked on the SRI proposal at least twice if my
>> memory is correct. (I think there may have been a protest to the original
>> award so that is why the second proposal.) I don't remember if this project
>> was ever awarded to anyone.
>>
>> barbara
>>

From jack at 3kitty.org Wed Mar 27 12:47:13 2019
From: jack at 3kitty.org (Jack Haverty)
Date: Wed, 27 Mar 2019 12:47:13 -0700
Subject: [ih] Internet History - from Community to Big Tech?
Message-ID: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>

An interesting perspective on Internet History landed on my screen today:

https://www.forbes.com/sites/cognitiveworld/2019/03/15/society-desperately-needs-an-alternative-web

One premise of the article (and the underlying ones it references,
especially Chris Dixon's article) is that the technical evolution of the
Internet has gone through stages. The early stage was one in which the
"Internet Community" drove the development of the open protocols used
pervasively. The second stage saw the "Big Tech" companies take over,
building on top of, and sometimes replacing, the earlier open protocols.
The next stage, now emerging, sees governments and regulations appear to
(try to) exert some level of control on how the technology affects society.

This view struck a chord with my personal experience over those first two
stages. For example, back in the 70s/80s we had electronic mail of several
kinds, mostly interconnected. People on SMTP-mail, UUCP, Compuserve,
MCIMail, etc. could communicate, if perhaps awkwardly. Today, I know
people who have their mailboxes on SMTP-mail, and Facebook, LinkedIn,
Instagram, and even game platforms, and there are many more. But mostly
they can't inter-communicate, and I need an account on each one, and need
to log in to each to read and send electronic mail in each walled garden.

All of these now constitute what people call "The Internet."

In the discussions on this list, I've mostly seen a historical view of the
Internet from the "Internet Community" perspective - i.e., the genealogy
of protocols documented in RFCs, IENs et al and driven by organizations
such as IETF, etc.

But in the actual "Internet" I'm wired into, I see a very different world.
Mysterious protocols are in use to do something unknowable because they're
secret. Protocols I see in the RFCs as "Internet Standards" aren't always
the ones that are actually used in the real world (e.g., email).
This experience seems to match the notion of the two "stages" of the
Internet, where the technical development of the running hardware and
software moved from the "Internet Community" of IETF et al into the
Engineering departments of the Big Tech companies. I spent a good bit of
my time both in "Stage One" and "Stage Two", but haven't seen much written
about Stage Two events and experiences, or about how organizations such as
IETF have changed.

It seems like that transition was an important part of Internet History,
but when it happened, how, who, why, etc., aren't discussed much.

What do you think.....?

/Jack Haverty

From karl at cavebear.com Wed Mar 27 14:15:21 2019
From: karl at cavebear.com (Karl Auerbach)
Date: Wed, 27 Mar 2019 14:15:21 -0700
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
Message-ID:

There was perhaps another stage, one that ran in parallel to the others.

I am thinking of things like the Air Force ULANA (Unified Local Area
Network Architecture) effort during the mid-to-late 1980's as well as the
Interop trade show network (which existed separate from the vendor stuff
at the shows) that had its heyday from about 1987 through the early 2000's
(it's still going on, but it's not what it once was.)

https://ieeexplore.ieee.org/document/4794963

https://books.google.com/books?id=4hwEAAAAMBAJ&pg=PA4&lpg=PA4&dq=ULANA+air+force&source=bl&ots=L26lgcBVRj&sig=ACfU3U1yxuJrO5uyVc9hKSkxn0hQkdJnYQ&hl=en&sa=X&ved=2ahUKEwjPwo6tmaPhAhXFsJ4KHcxwCF8Q6AEwBXoECAgQAQ#v=onepage&q=ULANA%20air%20force&f=false

By the way, I worked with the TRW ULANA team (btw, we won, but the award
was protested.)

ULANA was, at the time, a huge effort to bring commercial off the shelf
(COTS) material into a cohesive and interoperable set of parts that could
largely be simply purchased and plugged together (user configuration
required, of course.)
It covered everything from wiring to routers to desktop machines to
workstations to large mainframes. Its scope was both local and long-haul.
In other words, everything.

That project put energy into things like John Romkey's "Packet Driver"
idea for a universal way to add device drivers to PC-DOS machines. John
wrote the first packet driver - I think it was for a 3COM NIC. I did the
second, for the TRW ethernet card (based on Intel's then very flaky NIC
chips) we did for the project. And Russ Nelson took it further with his
Crynwr packet driver collection. (Russ deserves a pair of Internet angel
wings for his packet driver work.) The result was that all of the TCP
stack vendors for PCs - FTP Software, Beame and Whiteside, WRQ, NRC,
Netmanage, etc. - were freed from the burden of writing device driver
code. That significantly enhanced the spread of TCP/IP based PCs in the
years before Microsoft squished everything when they came out with the
built-in stack - but even they used the notion of a plug-in driver (which
they called NDIS).

And ULANA was one of the early customers for companies that were then in
the literal garage stage, like Cisco.

ULANA also built a fire under the notion that networks needed to be
monitored and managed - that was the era of ideas like HEMS, CMOT, and
SNMP. (It was also the era of OSI - which ULANA largely rejected in favor
of TCP/IP based networking.)

Overall, the ULANA project forced a lot of attention onto the notion of
TCP/IP interoperability. That notion later was picked up by the first
decade of Interop trade show networks (and many of us from the ULANA
project were involved in the design and deployment of the yearly and then
bi-yearly Interop net.)

(I would also suggest that the metronome effect of the Interop trade shows
created an intense pressure on vendors to improve products and pay serious
attention to compatibility with other vendors.)
--karl--

From mfidelman at meetinghouse.net Wed Mar 27 14:46:45 2019
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Wed, 27 Mar 2019 17:46:45 -0400
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
Message-ID: <42e9de13-9848-f016-fde9-7349ea56567e@meetinghouse.net>

I think that the change started the day the Internet opened to the public
(1992) and folks looked to the Internet as a commercial opportunity, and
secondarily as a service delivery opportunity (e.g., for Government
agencies).

By and large, my sense is that three things have happened in parallel:

1. A lot of the traditional Internet remains, and continues to be used as
built - business email (including academic, non-profit, government),
academic uses (epublishing, web sites, library access, etc.), lots and
lots of email lists, a vehicle for collaboration (e.g., open source
projects). About the only thing that's gone out of style is USENET -
which has largely been supplanted by things like Facebook. IRC seems to
remain, but lots of traffic has moved to other things like Slack.

2. Spam, and mass media! Not unlike our postal mail, and telephones, lots
and lots of crap has been added to the mix - raising the noise level, and
requiring our attention just to sort it out and throw it away.

3. New services introduced commercially. This is the area I worry about
the most - all these new services that have proprietary interfaces,
recreating a world of walled gardens. That kind of gets in the way of the
Internet as a common vehicle for "the community." We're all getting
sliced and diced. Not a good thing for large scale collaboration.
Miles Fidelman

On 3/27/19 3:47 PM, Jack Haverty wrote:
> An interesting perspective on Internet History landed on my screen today:
>
> https://www.forbes.com/sites/cognitiveworld/2019/03/15/society-desperately-needs-an-alternative-web
>
> One premise of the article (and the underlying ones it references,
> especially Chris Dixon's article) is that the technical evolution of the
> Internet has gone through stages. The early stage was one in which the
> "Internet Community" drove the development of the open protocols used
> pervasively. The second stage saw the "Big Tech" companies take over,
> building on top of, and sometimes replacing, the earlier open
> protocols. The next stage, now emerging, sees governments and
> regulations appear to (try to) exert some level of control on how the
> technology affects society.
>
> This view struck a chord with my personal experience over those first
> two stages. For example, back in the 70s/80s we had electronic mail of
> several kinds, mostly interconnected. People on SMTP-mail, UUCP,
> Compuserve, MCIMail, etc. could communicate, if perhaps awkwardly.
> Today, I know people who have their mailboxes on SMTP-mail, and
> Facebook, LinkedIn, Instagram, and even game platforms, and there are
> many more. But mostly they can't inter-communicate, and I need an
> account on each one, and need to log in to each to read and send
> electronic mail in each walled garden.
>
> All of these now constitute what people call "The Internet."
>
> In the discussions on this list, I've mostly seen a historical view of
> the Internet from the "Internet Community" perspective - i.e., the
> genealogy of protocols documented in RFCs, IENs et al and driven by
> organizations such as IETF, etc.
>
> But in the actual "Internet" I'm wired into, I see a very different
> world. Mysterious protocols are in use to do something unknowable
> because they're secret.
> Protocols I see in the RFCs as "Internet
> Standards" aren't always the ones that are actually used in the real
> world (e.g., email).
>
> This experience seems to match the notion of the two "stages" of the
> Internet, where the technical development of the running hardware and
> software moved from the "Internet Community" of IETF et al into the
> Engineering departments of the Big Tech companies. I spent a good bit
> of my time both in "Stage One" and "Stage Two", but haven't seen much
> written about Stage Two events and experiences, or about how
> organizations such as IETF have changed.
>
> It seems like that transition was an important part of Internet History,
> but when it happened, how, who, why, etc., aren't discussed much.
>
> What do you think.....?
>
> /Jack Haverty

--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why. ... unknown

From mfidelman at meetinghouse.net Wed Mar 27 17:50:21 2019
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Wed, 27 Mar 2019 20:50:21 -0400
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To:
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
	<42e9de13-9848-f016-fde9-7349ea56567e@meetinghouse.net>
Message-ID:

On 3/27/19 7:26 PM, Dave Crocker wrote:
> On 3/27/2019 2:46 PM, Miles Fidelman wrote:
>> I think that the change started the day the Internet opened to the
>> public (1992) and folks looked to the Internet as a commercial
>> opportunity, and secondarily as a service delivery opportunity (e.g.,
>> for Government agencies).
>
> Taking that model, the date is more like latter 1980s. By way of
> example, at least 3 vendors were selling proprietary (and, of course,
> incompatible) versions of NetBIOS over TCP.
>
> But were there equivalent 'proprietary' protocols over the Internet
> (or, arguably, Arpanet) before that? I think there were, though my
> brain isn't producing an example.
>
> Going back towards the Arpanet, it becomes challenging to distinguish
> "proprietary" from just run-of-the-mill experimentation. Ray
> Tomlinson used a 'proprietary' protocol for doing networked email; the
> FTP-based email commands came later. But his version got widespread
> use, because Tenex was popular amongst Arpanet sites.

The difference was that, in the academic days, the overall goal of the
Internet was to support information sharing and collaboration. Competing
approaches toward the goal of interoperability were one thing. But with
commercialization, the goal became building & segregating audiences -
where interoperability became disadvantageous.

Miles

--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why. ... unknown

From richard at bennett.com Wed Mar 27 18:28:31 2019
From: richard at bennett.com (Richard Bennett)
Date: Wed, 27 Mar 2019 19:28:31 -0600
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To:
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
Message-ID:

I remember packet drivers - lots of data copying going on, gave us a
reference point to improve on in years to come.

RB

> On Mar 27, 2019, at 3:15 PM, Karl Auerbach wrote:
>
>
> There was perhaps another stage, one that ran in parallel to the others.
>
> I am thinking of things like the Air Force ULANA (Unified Local Area
> Network Architecture) effort during the mid-to-late 1980's as well as
> the Interop trade show network (which existed separate from the vendor
> stuff at the shows) that had its heyday from about 1987 through the
> early 2000's (it's still going on, but it's not what it once was.)
>
> https://ieeexplore.ieee.org/document/4794963
>
> https://books.google.com/books?id=4hwEAAAAMBAJ&pg=PA4&lpg=PA4&dq=ULANA+air+force&source=bl&ots=L26lgcBVRj&sig=ACfU3U1yxuJrO5uyVc9hKSkxn0hQkdJnYQ&hl=en&sa=X&ved=2ahUKEwjPwo6tmaPhAhXFsJ4KHcxwCF8Q6AEwBXoECAgQAQ#v=onepage&q=ULANA%20air%20force&f=false
>
> By the way, I worked with the TRW ULANA team (btw, we won, but the award
> was protested.)
>
> ULANA was, at the time, a huge effort to bring commercial off the shelf
> (COTS) material into a cohesive and interoperable set of parts that
> could largely be simply purchased and plugged together (user
> configuration required, of course.) It covered everything from wiring
> to routers to desktop machines to workstations to large mainframes.
> Its scope was both local and long-haul. In other words, everything.
>
> That project put energy into things like John Romkey's "Packet
> Driver" idea for a universal way to add device drivers to PC-DOS
> machines. John wrote the first packet driver - I think it was for a
> 3COM NIC. I did the second for the TRW ethernet card (based on Intel's
> then very flaky NIC chips) we did for the project. And Russ Nelson took
> it further with his Crynwr packet driver collection. (Russ deserves a
> pair of Internet angel wings for his packet driver work.) The result
> was that all of the TCP stack vendors for PCs - FTP Software, Beame and
> Whiteside, WRQ, NRC, Netmanage, etc. - were freed from the burden of
> writing device driver code.
> That significantly enhanced the spread of
> TCP/IP based PCs in the years before Microsoft squished everything when
> they came out with the built-in stack - but even they used the notion of
> a plug-in driver (which they called NDIS).
>
> And ULANA was one of the early customers for companies that were then in
> the literal garage stage, like Cisco.
>
> ULANA also built a fire under the notion that networks needed to be
> monitored and managed - that was the era of ideas like HEMS, CMOT, and
> SNMP. (It was also the era of OSI - which ULANA largely rejected in
> favor of TCP/IP based networking.)
>
> Overall, the ULANA project forced a lot of attention onto the notion of
> TCP/IP interoperability. That notion later was picked up by the first
> decade of Interop trade show networks (and many of us from the ULANA
> project were involved in the design and deployment of the yearly and
> then bi-yearly Interop net.)
>
> (I would also suggest that the metronome effect of the Interop trade
> shows created an intense pressure on vendors to improve products and pay
> serious attention to compatibility with other vendors.)
>
> --karl--

--
Richard Bennett
High Tech Forum Founder
Ethernet & Wi-Fi standards co-creator
Internet Policy Consultant

From jack at 3kitty.org Wed Mar 27 19:04:59 2019
From: jack at 3kitty.org (Jack Haverty)
Date: Wed, 27 Mar 2019 19:04:59 -0700
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To:
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
	<42e9de13-9848-f016-fde9-7349ea56567e@meetinghouse.net>
Message-ID: <3a70378f-6983-d645-2540-e2eac31d9226@3kitty.org>

Or said another way:
"Interoperability is the fertile ground from which walled gardens sprout."

/Jack Haverty

On 3/27/19 5:50 PM, Miles Fidelman wrote:
> The difference was that, in the academic days, the overall goal of the
> Internet was to support information sharing and collaboration.
> Competing approaches toward the goal of interoperability were one
> thing. But with commercialization, the goal became building &
> segregating audiences - where interoperability became disadvantageous.

From jack at 3kitty.org Wed Mar 27 19:16:25 2019
From: jack at 3kitty.org (Jack Haverty)
Date: Wed, 27 Mar 2019 19:16:25 -0700
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To:
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
Message-ID: <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>

On 3/27/19 2:15 PM, Karl Auerbach wrote:
> The result
> was that all of the TCP stack vendors for PCs - FTP Software, Beame and
> Whiteside, WRQ, NRC, Netmanage, etc. - were freed from the burden of
> writing device driver code.

Interesting... In that era (circa 1990-2) I was at Oracle, and I recall
the pain of having to get our "application" software (i.e., databases) to
run over at least 30 separate implementations of TCP just for PCs.

The "packet driver" standardization may have made it easier for all those
people to write their TCP stacks -- but there was no such standardization
at the next level - the APIs that allowed an app to use those stacks. So
we needed different code for each TCP stack.

TCP of course offered standardization -- but within its own walled garden,
competing with OSI, DECnet, SNA, Vines, XNS, Netware, etc. TCP's walled
garden won the battle and all the others died out.
I wonder if that is where the boundary starts between interoperability and
walled gardens - i.e., where people take advantage of the "lower"
uniformity brought by some standard (whether in spec or in code), but fail
to coordinate standardization at the level "above" them, where they
present their services to the next guy up. By maintaining uniqueness,
they hope their walled garden will be the one to thrive.

The history of all those walled gardens and boundaries seems like an
important part of Internet History.

/Jack Haverty

From mfidelman at meetinghouse.net Thu Mar 28 06:13:14 2019
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Thu, 28 Mar 2019 09:13:14 -0400
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
	<92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>
Message-ID:

On 3/27/19 10:16 PM, Jack Haverty wrote:
> TCP of course offered standardization -- but within its own walled
> garden, competing with OSI, DECnet, SNA, Vines, XNS, Netware, etc.
> TCP's walled garden won the battle and all the others died out.
>
> I wonder if that is where the boundary starts between interoperability
> and walled gardens - i.e., where people take advantage of the "lower"
> uniformity brought by some standard (whether in spec or in code), but
> fail to coordinate standardization at the level "above" them, where they
> present their services to the next guy up. By maintaining uniqueness,
> they hope their walled garden will be the one to thrive.
>
> The history of all those walled gardens and boundaries seems like an
> important part of Internet History.

Interesting point. I've always thought of walled gardens from the top
down; with email being the clearest example. First we had lots of email
systems that didn't talk to each other - predating the ARPANET (e.g., mail
on an individual time sharing system).
In the early days, we had vendor-specific email, running within
enterprises. ARPANET bridged those - but only for a limited community.
Meanwhile, folks like AOL & Compuserve competed on how many people could
be reached on their platform (not unlike the early telephone days - when a
business had to have a dozen phones on their desk, from a dozen telcos, to
allow all their customers to reach them). And then, a little bit before
the Internet fully opened to the public, we started to see email
connections (e.g., Compuserve exchanged email with the Internet).

Now, email is starting to go the other way; what with various "secure"
email systems promulgated by health care organizations, banks, and what
not.

Calendars are the ones that really bug me, though. We have interoperable
specs, and they actually kind of worked - until Google stopped supporting
them (funny how Microsoft Outlook DOES support them). It makes scheduling
a meeting a royal pain.

Looking at things from the bottom up provides an interesting alternative
view.

Miles Fidelman

--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why. ... unknown

From wayne at playaholic.com Thu Mar 28 14:48:32 2019
From: wayne at playaholic.com (Wayne Hathaway)
Date: Thu, 28 Mar 2019 17:48:32 -0400
Subject: [ih] Where was the first host name table?
In-Reply-To: <58BD6A83-07F7-4533-8E04-1658344845A9@comcast.net>
References: <58BD6A83-07F7-4533-8E04-1658344845A9@comcast.net>
Message-ID: <1553809712.gpp1f67s0ks444ks@hostingemail.digitalspace.net>

On Mon, 11 Mar 2019 23:35:17 -0400, John Day wrote:
>> Again we have another example of "the effect of T. S. Eliot on
>> Shakespeare."
>>
>> It was a short table. Everyone's implementation did their own table. I
>> think the official list was at the NIC.
>> That was before the network map would no longer fit on one 8.5 x 11
>> sheet of paper and you couldn't print who was up and who was down from
>> a well-known port at the NMC.

True it was short, but getting bigger all the time. I was responsible for
Ames-67, a TSS/360 timesharing system at NASA Ames Research Center, and
having never heard of daemon processes or anything, but getting more and
more tired of manually trying to keep our host table in sync with the NIC,
I implemented something to automatically connect to the NIC once a week
(4:00am on Monday IIRC) and download the latest table. Primitive but
effective, kinda like everything we did back then.

wayne

From karl at cavebear.com Thu Mar 28 15:33:20 2019
From: karl at cavebear.com (Karl Auerbach)
Date: Thu, 28 Mar 2019 15:33:20 -0700
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
	<92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>
Message-ID: <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com>

On 3/27/19 7:16 PM, Jack Haverty wrote:
> The "packet driver" standardization may have made it easier for all
> those people to write their TCP stacks -- but there was no such
> standardization at the next level - the APIs that allowed an app to use
> those stacks. So we needed different code for each TCP stack.

The Winsock API evolved on the top side of many of the TCP stacks.
Winsock was a distant relative of the Unix socket API (and when I say
"distant" I mean "they could almost see one another on a clear day with a
good telescope"). If I remember correctly there was a vendor consortium
to work on making sure that Winsock was clear and solid. I do remember
running some Winsock interoperability bake-offs.
> I wonder if that is where the boundary starts between interoperability
> and walled gardens

At the Interop shows, especially in the earlier days, we (the team that
built and ran the show net) really beat up on vendors that were not
interoperable. I remember at least one case where we simply unplugged a
router/switch vendor because they were not playing nice.

We always pre-built and pre-tested the main show network (45.x.x.x/8) in a
warehouse a couple of months before the show. That way we had everything
relatively solid before we loaded up the trucks (and we filled a lot of
trucks - I remember once we filled 43 large semitrailers - and that was
just for our own gear, not the vendors'.)

And wow, did we ever find some pathological non-interoperation. But
sometimes the cause was relatively innocent - as in one instance having to
do with a difference of interpretation regarding the forwarding of IP
multicast packets between Cisco and Wellfleet routers that ended up
causing us an infinite ethernet frame loop.

And once our FDDI expert - Merike Kaeo - found a specification flaw in
FDDI physical layer stuff: The various vendors came up with a fix on the
spot and were blasting new firmware into PROMs in their hotel rooms.

> - i.e., where people take advantage of the "lower"
> uniformity brought by some standard (whether in spec or in code), but
> fail to coordinate standardization at the level "above" them, where they
> present their services to the next guy up. By maintaining uniqueness,
> they hope their walled garden will be the one to thrive.

I recently had someone confirm a widely held belief that Sun Microsystems
had tuned the CSMA/CD timers on their Ethernet interfaces to have a
winning bias against Ethernet machines that adhered to the IEEE/DIX
ethernet timer values.
Those of us who tended to work with networked PC platforms were well aware of the effect of putting a Sun onto the same Ethernet: what had worked before stopped working, but the Suns all chatted among themselves quite happily. And FTP Software used to put its license key information in the part of Ethernet frames between the end of the ARP payload and the end of the frame's data field. That caused a lot of strange side effects. (One can still send a lot of IP stacks into death spirals by putting an IPv4/v6 packet into an Ethernet frame that is larger than the minimum needed to hold the IP packet - a lot of deployed code still incorrectly uses the received frame size to impute the length of the IP packet rather than looking at the IP header.) And FTP Software also realized that with IP fragmentation the receiver really does not know how big a buffer will ultimately be required until the last fragment arrives. So they altered their IP stack to send the last fragment first. That had the effect of causing all of their competitor NetManage's stacks to crash when they got a last-fragment-first. --karl-- From brian.e.carpenter at gmail.com Thu Mar 28 17:17:47 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 29 Mar 2019 13:17:47 +1300 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> Message-ID: On 29-Mar-19 11:33, Karl Auerbach wrote: > > On 3/27/19 7:16 PM, Jack Haverty wrote: > >> The "packet driver" standardization may have made it easier for all >> those people to write their TCP stacks -- but there was no such >> standardization at the next level - the APIs that allowed an app to use >> those stacks. So we needed different code for each TCP stack.
> > The Winsock API evolved on the top side of many of the TCP stacks. > Winsock was a distant relative of the Unix socket API (and when I say > "distant" I mean "they could almost see one another on a clear day with > a good telescope"). If I remember correctly there was a vendor > consortium to work on making sure that Winsock was clear and solid. I > do remember running some Winsock interoperability bake-offs. Yes, all of that. But it remains a daily problem that Winsock2 is incompatible with the POSIX socket API, and there are some subtle and not-so-subtle discrepancies that make software portability a real problem to this day, when you are trying to do anything even slightly off the beaten track. >> I wonder if that is where the boundary starts between interoperability >> and walled gardens No, I don't think so, not any more. But as far as I'm concerned that isn't a history topic... Oh, all right, I mean: https://tools.ietf.org/html/draft-carpenter-limited-domains Brian > > At the Interop shows, especially in the earlier days, we (the team that > built and ran the show net) really beat up on vendors that were not > interoperable. I remember at least one case where we simply unplugged a > router/switch vendor because they were not playing nice. > > We always pre-built and pre-tested the main show network (45.x.x.x/8) in > a warehouse a couple of months before the show. That way we had > everything relatively solid before we loaded up the trucks (and we > filled a lot of trucks - I remember once we filled 43 large semitrailers > - and that was just for our own gear, not the vendors'.) > > And wow, did we ever find some pathological non-interoperation. But > sometimes the cause was relatively innocent - as one instance having to > do with a difference of interpretation regarding the forwarding of IP > multicast packets between Cisco and Wellfleet routers that ended up > causing us an infinite ethernet frame loop. 
And once our FDDI expert - > Merike Kaeo - found a specification flaw in FDDI physical layer stuff: > The various vendors came up with a fix on the spot and were blasting new > firmware into PROMs in their hotel rooms. > > > - i.e., where people take advantage of the "lower" >> uniformity brought by some standard (whether in spec or in code), but >> fail to coordinate standardization at the level "above" them, where they >> present their services to the next guy up.? By maintaining uniqueness, >> they hope their walled garden will be the one to thrive. > > I recently had someone confirm a widely held belief that Sun > Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces > to have a winning bias against Ethernet machines that adhered to the > IEEE/DIX ethernet timer values. Those of us who tended to work with > networked PC platforms were well aware of the effect of putting a Sun > onto the same Ethernet: what had worked before stopped working, but the > Suns all chatted among themselves quite happily. > > And FTP Software used to put its license key information in the part of > Ethernet frames between the end of an ARP and the end of the data of the > Ethernet frame. That caused a lot of strange side effects. (One can > still send a lot of IP stacks into death spirals by putting an IPv4/v6 > packet into an Ethernet frame that is larger than the minimum needed to > hold the IP packet - a lot of deployed code still incorrectly uses the > received frame size to impute the length of the IP packet rather than > looking at the IP header.) > > And FTP software also realized that with IP fragmentation the receiver > really does not know how big a buffer will ultimately be required until > the last fragment arrives. So they altered their IP stack to send the > last fragment first. That had the effect of causing all of their > competitor Netmanage stacks to crash when they got a last-fragement-first. 
> > --karl-- > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > From agmalis at gmail.com Fri Mar 29 09:40:34 2019 From: agmalis at gmail.com (Andrew G. Malis) Date: Fri, 29 Mar 2019 17:40:34 +0100 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> Message-ID: Brian, I took this as an opportunity to re-read your draft, and I have another example for section 3, well-managed wide area networks run by service providers for private enterprise business services such as layer 2 (Ethernet, etc.) point-to-point pseudowires, multipoint layer 2 Ethernet VPNs using VPLS or EVPN, and layer 3 IP VPNs. These are generally characterized by service level agreements for availability and packet loss. These are different from your #9 in that they mostly run over MPLS infrastructures and the requirements for these services are well-defined by the IETF. Cheers, Andy On Fri, Mar 29, 2019 at 1:37 AM Brian E Carpenter < brian.e.carpenter at gmail.com> wrote: > On 29-Mar-19 11:33, Karl Auerbach wrote: > > > > On 3/27/19 7:16 PM, Jack Haverty wrote: > > > >> The "packet driver" standardization may have made it easier for all > >> those people to write their TCP stacks -- but there was no such > >> standardization at the next level - the APIs that allowed an app to use > >> those stacks. So we needed different code for each TCP stack. > > > > The Winsock API evolved on the top side of many of the TCP stacks. > > Winsock was a distant relative of the Unix socket API (and when I say > > "distant" I mean "they could almost see one another on a clear day with > > a good telescope"). 
If I remember correctly there was a vendor > > consortium to work on making sure that Winsock was clear and solid. I > > do remember running some Winsock interoperability bake-offs. > > Yes, all of that. But it remains a daily problem that Winsock2 is > incompatible with the POSIX socket API, and there are some subtle > and not-so-subtle discrepancies that make software portability a > real problem to this day, when you are trying to do anything > even slightly off the beaten track. > > >> I wonder if that is where the boundary starts between interoperability > >> and walled gardens > > No, I don't think so, not any more. But as far as I'm concerned that > isn't a history topic... Oh, all right, I mean: > https://tools.ietf.org/html/draft-carpenter-limited-domains > > Brian > > > > > At the Interop shows, especially in the earlier days, we (the team that > > built and ran the show net) really beat up on vendors that were not > > interoperable. I remember at least one case where we simply unplugged a > > router/switch vendor because they were not playing nice. > > > > We always pre-built and pre-tested the main show network (45.x.x.x/8) in > > a warehouse a couple of months before the show. That way we had > > everything relatively solid before we loaded up the trucks (and we > > filled a lot of trucks - I remember once we filled 43 large semitrailers > > - and that was just for our own gear, not the vendors'.) > > > > And wow, did we ever find some pathological non-interoperation. But > > sometimes the cause was relatively innocent - as one instance having to > > do with a difference of interpretation regarding the forwarding of IP > > multicast packets between Cisco and Wellfleet routers that ended up > > causing us an infinite ethernet frame loop. 
And once our FDDI expert - > > Merike Kaeo - found a specification flaw in FDDI physical layer stuff: > > The various vendors came up with a fix on the spot and were blasting new > > firmware into PROMs in their hotel rooms. > > > > > > - i.e., where people take advantage of the "lower" > >> uniformity brought by some standard (whether in spec or in code), but > >> fail to coordinate standardization at the level "above" them, where they > >> present their services to the next guy up. By maintaining uniqueness, > >> they hope their walled garden will be the one to thrive. > > > > I recently had someone confirm a widely held belief that Sun > > Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces > > to have a winning bias against Ethernet machines that adhered to the > > IEEE/DIX ethernet timer values. Those of us who tended to work with > > networked PC platforms were well aware of the effect of putting a Sun > > onto the same Ethernet: what had worked before stopped working, but the > > Suns all chatted among themselves quite happily. > > > > And FTP Software used to put its license key information in the part of > > Ethernet frames between the end of an ARP and the end of the data of the > > Ethernet frame. That caused a lot of strange side effects. (One can > > still send a lot of IP stacks into death spirals by putting an IPv4/v6 > > packet into an Ethernet frame that is larger than the minimum needed to > > hold the IP packet - a lot of deployed code still incorrectly uses the > > received frame size to impute the length of the IP packet rather than > > looking at the IP header.) > > > > And FTP software also realized that with IP fragmentation the receiver > > really does not know how big a buffer will ultimately be required until > > the last fragment arrives. So they altered their IP stack to send the > > last fragment first. 
That had the effect of causing all of their > > competitor Netmanage stacks to crash when they got a > last-fragement-first. > > > > --karl-- > > _______ > > internet-history mailing list > > internet-history at postel.org > > http://mailman.postel.org/mailman/listinfo/internet-history > > Contact list-owner at postel.org for assistance. > > > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnu at toad.com Fri Mar 29 17:40:53 2019 From: gnu at toad.com (John Gilmore) Date: Fri, 29 Mar 2019 17:40:53 -0700 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> Message-ID: <30603.1553906453@hop.toad.com> Karl Auerbach wrote: > I recently had someone confirm a widely held belief that Sun > Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces > to have a winning bias against Ethernet machines that adhered to the > IEEE/DIX ethernet timer values. Those of us who tended to work with > networked PC platforms were well aware of the effect of putting a Sun > onto the same Ethernet: what had worked before stopped working, but > the Suns all chatted among themselves quite happily. Are we talking about 10 Mbit Ethernet, or something later? I worked at Sun back then. Sun was shipping products with Ethernet before the IBM PC even existed. Sun products used standard Ethernet chips. Some of those chips were super customizable via internal registers (I have a T1 card that uses an Ethernet chip with settings that let it talk telco T1/DS1 protocol!), but Sun always set them to meet the standard specs. 
What evidence is there of any non-standard settings? What Sun did differently was that we tuned the implementation so it could actually send and receive back-to-back packets, at the minimum specified inter-packet gaps. By building both the hardware and the software ourselves (like Apple today, and unlike Microsoft), we were able to work out all the kinks to maximize performance. We could improve everything: software drivers, interrupt latencies, TCP/IP stacks, DMA bus arbitration overhead. Sun was the first to do production shared disk-drive access over Ethernet, to reduce the cost of our "diskless" workstations. In sending 4Kbyte filesystem blocks among client and server, we sent an IP-fragmented 4K+ UDP datagram in three BACK-TO-BACK Ethernet packets. Someone, I think it was Van Jacobson, did some early work on maximizing Ethernet thruput, and reported it at USENIX conferences. His observation was that to get maximal thruput, you needed 3 things to be happening absolutely simultaneously: the sender processing & queueing the next packet; the Ethernet wire moving the current packet; the receiver dequeueing and processing the previous packet. If any of these operations took longer than the others, then that would be the limiting factor in the thruput. This applies to half duplex operation (only one side transmits at a time); the end node processing requirement doubles if you run full duplex data in both directions (on more modern Ethernets that support that) . Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his favorite things to work on was network performance. Here's one of his signature blocks from 1996: Yow! 11.26 MB/s remote host TCP bandwidth & //// 199 usec remote TCP latency over 100Mb/s //// ethernet. Beat that! //// -----------------------------------------////__________ o David S. 
Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< My guess is that the ISA cards of the day had never even *seen* back to back Ethernet packets (with only the 9.6 uSec interframe spacing between them), so of course they weren't tested to be able to handle them. The ISA bus was slow, and the PC market was cheap, and RAM was expensive, so most cards just had one or two packet buffers. And if the CPU didn't immediately grab one of those received buffers, then the next packet would get dropped for lack of a buffer to put it in. In sending, you had to have the second buffer queued long before the inter-packet gap, or you wouldn't send with minimum packet spacing on the wire. Most PC operating systems couldn't do that. And if your card was slower than the standard 9.6 uSec inter-packet gap after sensing carrier, then any Sun waiting to transmit would beat your card to the wire, deferring your card's transmission. You may have also been seeing the "Channel capture effect"; see: https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect John From jack at 3kitty.org Fri Mar 29 18:57:40 2019 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 29 Mar 2019 18:57:40 -0700 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <30603.1553906453@hop.toad.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> Message-ID: <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> I can confirm that there was at least one Unix vendor that violated the Ethernet specs (10 Mb/s). I was at Oracle in the early 90s, where we had at least one of every common computer so that we could test software. While testing, we noticed that when one particular type of machine was active doing a long bulk transfer, all of the other traffic on our LAN slowed to a crawl. I was a hardware guy in a software universe, but I managed to find one other hardware type, and we scrounged up an oscilloscope, and then looked closely at the wire and at the spec. I don't remember the details, but there was some timer that was supposed to have a certain minimum value and that Unix box was consistently violating it. So it could effectively seize the LAN for as long as it had traffic. Sorry, I can't remember which vendor it was. It might have been Sun, or maybe one specific model/vintage, since we had a lot of Sun equipment but hadn't noticed the problem before. I suspect there's a lot of such "standards" that are routinely violated in the network. Putting it on paper and declaring it mandatory doesn't make it true. Personally I never saw much rigorous certification testing or enforcement (not just of Ethernet), and the general "robustness" designs can hide bad behavior. /Jack Haverty On 3/29/19 5:40 PM, John Gilmore wrote: > Karl Auerbach wrote: >> I recently had someone confirm a widely held belief that Sun >> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >> to have a winning bias against Ethernet machines that adhered to the >> IEEE/DIX ethernet timer values. Those of us who tended to work with >> networked PC platforms were well aware of the effect of putting a Sun >> onto the same Ethernet: what had worked before stopped working, but >> the Suns all chatted among themselves quite happily. > Are we talking about 10 Mbit Ethernet, or something later? > > I worked at Sun back then. Sun was shipping products with Ethernet > before the IBM PC even existed. Sun products used standard Ethernet > chips. Some of those chips were super customizable via internal > registers (I have a T1 card that uses an Ethernet chip with settings > that let it talk telco T1/DS1 protocol!), but Sun always set them to > meet the standard specs. What evidence is there of any non-standard > settings?
> > What Sun did differently was that we tuned the implementation so it > could actually send and receive back-to-back packets, at the minimum > specified inter-packet gaps. By building both the hardware and the > software ourselves (like Apple today, and unlike Microsoft), we were > able to work out all the kinks to maximize performance. We could > improve everything: software drivers, interrupt latencies, TCP/IP > stacks, DMA bus arbitration overhead. Sun was the first to do > production shared disk-drive access over Ethernet, to reduce the cost of > our "diskless" workstations. In sending 4Kbyte filesystem blocks among > client and server, we sent an IP-fragmented 4K+ UDP datagram in three > BACK-TO-BACK Ethernet packets. > > Someone, I think it was Van Jacobson, did some early work on maximizing > Ethernet thruput, and reported it at USENIX conferences. His > observation was that to get maximal thruput, you needed 3 things to be > happening absolutely simultaneously: the sender processing & queueing > the next packet; the Ethernet wire moving the current packet; the > receiver dequeueing and processing the previous packet. If any of these > operations took longer than the others, then that would be the limiting > factor in the thruput. This applies to half duplex operation (only one > side transmits at a time); the end node processing requirement doubles > if you run full duplex data in both directions (on more modern Ethernets > that support that) . > > Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his > favorite things to work on was network performance. Here's one of his > signature blocks from 1996: > > Yow! 11.26 MB/s remote host TCP bandwidth & //// > 199 usec remote TCP latency over 100Mb/s //// > ethernet. Beat that! //// > -----------------------------------------////__________ o > David S. 
Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< > > My guess is that the ISA cards of the day had never even *seen* back to > back Ethernet packets (with only the 9.6 uSec interframe spacing between > them), so of course they weren't tested to be able to handle them. The > ISA bus was slow, and the PC market was cheap, and RAM was expensive, so > most cards just had one or two packet buffers. And if the CPU didn't > immediately grab one of those received buffers, then the next packet > would get dropped for lack of a buffer to put it in. In sending, you > had to have the second buffer queued long before the inter-packet gap, or > you wouldn't send with minimum packet spacing on the wire. Most PC > operating systems couldn't do that. And if your card was slower than > the standard 9.6usec inter-packet gap after sensing carrier, > then any Sun waiting to transmit would beat your card to the wire, > deferring your card's transmission. > > You may have also been seeing the "Channel capture effect"; see: > > https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect > > John > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. From ocl at gih.com Sat Mar 30 00:41:15 2019 From: ocl at gih.com (=?UTF-8?Q?Olivier_MJ_Cr=c3=a9pin-Leblond?=) Date: Sat, 30 Mar 2019 08:41:15 +0100 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> Message-ID: Dear Jack, I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 compatible cards on the same network. 
Having started with Novell & 3COM cards, all on Coax, we found that we started getting timeouts when we added more cheap NE2000 compatible cards. Did the same thing with oscilloscopes/analysers and tweaked parameters to go around this problem. Warm regards, Olivier On 30/03/2019 02:57, Jack Haverty wrote: > I can confirm that there was at least one Unix vendor that violated the > Ethernet specs (10mb/s).? I was at Oracle in the early 90s, where we had > at least one of every common computer so that we could test software. > > While testing, we noticed that when one particular type of machine was > active doing a long bulk transfer, all of the other traffic on our LAN > slowed to a crawl.?? I was a hardware guy in a software universe, but I > managed to find one other hardware type, and we scrounged up an > oscilloscope, and then looked closely at the wire and at the spec. > > I don't remember the details, but there was some timer that was supposed > to have a certain minimum value and that Unix box was consistently > violating it.? So it could effectively seize the LAN for as long as it > had traffic. > > Sorry, I can't remember which vendor it was.? It might have been Sun, or > maybe one specific model/vintage, since we had a lot of Sun equipment > but hadn't noticed the problem before. > > I suspect there's a lot of such "standards" that are routinely violated > in the network.?? Putting it on paper and declaring it mandatory doesn't > make it true.? Personally I never saw much rigorous certification > testing or enforcement (not just of Ethernet), and the general > "robustness" designs can hide bad behavior. > > /Jack Haverty > > > On 3/29/19 5:40 PM, John Gilmore wrote: >> Karl Auerbach wrote: >>> I recently had someone confirm a widely held belief that Sun >>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>> to have a winning bias against Ethernet machines that adhered to the >>> IEEE/DIX ethernet timer values. 
Those of us who tended to work with >>> networked PC platforms were well aware of the effect of putting a Sun >>> onto the same Ethernet: what had worked before stopped working, but >>> the Suns all chatted among themselves quite happily. >> Are we talking about 10 Mbit Ethernet, or something later? >> >> I worked at Sun back then. Sun was shipping products with Ethernet >> before the IBM PC even existed. Sun products used standard Ethernet >> chips. Some of those chips were super customizable via internal >> registers (I have a T1 card that uses an Ethernet chip with settings >> that let it talk telco T1/DS1 protocol!), but Sun always set them to >> meet the standard specs. What evidence is there of any non-standard >> settings? >> >> What Sun did differently was that we tuned the implementation so it >> could actually send and receive back-to-back packets, at the minimum >> specified inter-packet gaps. By building both the hardware and the >> software ourselves (like Apple today, and unlike Microsoft), we were >> able to work out all the kinks to maximize performance. We could >> improve everything: software drivers, interrupt latencies, TCP/IP >> stacks, DMA bus arbitration overhead. Sun was the first to do >> production shared disk-drive access over Ethernet, to reduce the cost of >> our "diskless" workstations. In sending 4Kbyte filesystem blocks among >> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >> BACK-TO-BACK Ethernet packets. >> >> Someone, I think it was Van Jacobson, did some early work on maximizing >> Ethernet thruput, and reported it at USENIX conferences. His >> observation was that to get maximal thruput, you needed 3 things to be >> happening absolutely simultaneously: the sender processing & queueing >> the next packet; the Ethernet wire moving the current packet; the >> receiver dequeueing and processing the previous packet. 
If any of these >> operations took longer than the others, then that would be the limiting >> factor in the thruput. This applies to half duplex operation (only one >> side transmits at a time); the end node processing requirement doubles >> if you run full duplex data in both directions (on more modern Ethernets >> that support that) . >> >> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >> favorite things to work on was network performance. Here's one of his >> signature blocks from 1996: >> >> Yow! 11.26 MB/s remote host TCP bandwidth & //// >> 199 usec remote TCP latency over 100Mb/s //// >> ethernet. Beat that! //// >> -----------------------------------------////__________ o >> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >> >> My guess is that the ISA cards of the day had never even *seen* back to >> back Ethernet packets (with only the 9.6 uSec interframe spacing between >> them), so of course they weren't tested to be able to handle them. The >> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >> most cards just had one or two packet buffers. And if the CPU didn't >> immediately grab one of those received buffers, then the next packet >> would get dropped for lack of a buffer to put it in. In sending, you >> had to have the second buffer queued long before the inter-packet gap, or >> you wouldn't send with minimum packet spacing on the wire. Most PC >> operating systems couldn't do that. And if your card was slower than >> the standard 9.6usec inter-packet gap after sensing carrier, >> then any Sun waiting to transmit would beat your card to the wire, >> deferring your card's transmission. >> >> You may have also been seeing the "Channel capture effect"; see: >> >> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >> >> John >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From karl at cavebear.com Sat Mar 30 00:47:53 2019 From: karl at cavebear.com (Karl Auerbach) Date: Sat, 30 Mar 2019 00:47:53 -0700 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <30603.1553906453@hop.toad.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> Message-ID: For certain those early PC Ethernet NICs were pretty awful - the original 3COM 3C501 card had only one buffer. A TCP stack on it often missed quick ACKs from a faster peer (and with respect to those early PCs, pretty much every peer was faster.) But it wasn't long before things in the PC world got much better. By the time ULANA rolled around in the mid-1980s Intel had put out its first generation of reasonably smart Ethernet chipsets - I wrote a driver using them. They felt surprisingly like an old IBM 360/370 channel - one wrote a set of descriptors to do scatter/gather on chains of receives and transmits. All of the hard work of dealing with the CSMA/CD system and back-to-back packets was in the Intel hardware - and all of the Ethernet access timers were in there as well. There were a lot of other interoperability problems in that era. That was the time when the IEEE decided that Ethernet needed SNAP headers and ISO/OSI had everyone thinking about variable-length addresses (such as NSAPs.) There is a legacy from that - the framing of things carried on Ethernet VLANs is still potentially excessively complicated and probably has driven at least as many network programmers into wall-banging frenzies as the CRLF vs LF vs LFCR vs NVT/Telnet and whitespace/tabbing conventions. Wasn't there also some disagreement over whether the IPv4 broadcast address was 0.0.0.0 (BSD) or 255.255.255.255 (everybody else)?
--karl-- From dave at taht.net Sat Mar 30 05:52:32 2019 From: dave at taht.net (Dave Täht) Date: Sat, 30 Mar 2019 12:52:32 +0000 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> Message-ID: <20190330125232.GA28169@mail.taht.net> The PCI DECCHIP tulip cards were :great: they had a 4 packet buffer (8K onboard) and could make line rate easily. We used them (eventually) in our first embedded linux wireless routers. http://the-edge.blogspot.com/2010/10/who-invented-embedded-linux-based.html On Sat, Mar 30, 2019 at 12:47:53AM -0700, Karl Auerbach wrote: > [Karl Auerbach's message of 30 March, quoted in full above, trimmed] > > --karl-- > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. -- My email server only sends and accepts starttls encrypted mail in transit. One benefit - it stops all spams thus far, cold. If you are not encrypting by default you are not going to get my mail or I, yours. From jack at 3kitty.org Sat Mar 30 13:33:53 2019 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 30 Mar 2019 13:33:53 -0700 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> Message-ID: <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> The Unix-box problem was the only one I recall. However, we did move the testing world onto a separate LAN so anything bad that some random box did wouldn't affect everyone. So it may have happened but we didn't care. Our mission was to get the software working.... /Jack On 3/30/19 12:41 AM, Olivier MJ Crépin-Leblond wrote: > Dear Jack, > > I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 > compatible cards on the same network. > Having started with Novell & 3COM cards, all on Coax, we found that we > started getting timeouts when we added more cheap NE2000 compatible cards.
> Did the same thing with oscilloscopes/analysers and tweaked parameters > to go around this problem. > Warm regards, > > Olivier > > On 30/03/2019 02:57, Jack Haverty wrote: >> I can confirm that there was at least one Unix vendor that violated the >> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >> at least one of every common computer so that we could test software. >> >> While testing, we noticed that when one particular type of machine was >> active doing a long bulk transfer, all of the other traffic on our LAN >> slowed to a crawl. I was a hardware guy in a software universe, but I >> managed to find one other hardware type, and we scrounged up an >> oscilloscope, and then looked closely at the wire and at the spec. >> >> I don't remember the details, but there was some timer that was supposed >> to have a certain minimum value and that Unix box was consistently >> violating it. So it could effectively seize the LAN for as long as it >> had traffic. >> >> Sorry, I can't remember which vendor it was. It might have been Sun, or >> maybe one specific model/vintage, since we had a lot of Sun equipment >> but hadn't noticed the problem before. >> >> I suspect there's a lot of such "standards" that are routinely violated >> in the network. Putting it on paper and declaring it mandatory doesn't >> make it true. Personally I never saw much rigorous certification >> testing or enforcement (not just of Ethernet), and the general >> "robustness" designs can hide bad behavior. >> >> /Jack Haverty >> >> >> On 3/29/19 5:40 PM, John Gilmore wrote: >>> Karl Auerbach wrote: >>>> I recently had someone confirm a widely held belief that Sun >>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>>> to have a winning bias against Ethernet machines that adhered to the >>>> IEEE/DIX ethernet timer values.
Those of us who tended to work with >>>> networked PC platforms were well aware of the effect of putting a Sun >>>> onto the same Ethernet: what had worked before stopped working, but >>>> the Suns all chatted among themselves quite happily. >>> Are we talking about 10 Mbit Ethernet, or something later? >>> >>> I worked at Sun back then. Sun was shipping products with Ethernet >>> before the IBM PC even existed. Sun products used standard Ethernet >>> chips. Some of those chips were super customizable via internal >>> registers (I have a T1 card that uses an Ethernet chip with settings >>> that let it talk telco T1/DS1 protocol!), but Sun always set them to >>> meet the standard specs. What evidence is there of any non-standard >>> settings? >>> >>> What Sun did differently was that we tuned the implementation so it >>> could actually send and receive back-to-back packets, at the minimum >>> specified inter-packet gaps. By building both the hardware and the >>> software ourselves (like Apple today, and unlike Microsoft), we were >>> able to work out all the kinks to maximize performance. We could >>> improve everything: software drivers, interrupt latencies, TCP/IP >>> stacks, DMA bus arbitration overhead. Sun was the first to do >>> production shared disk-drive access over Ethernet, to reduce the cost of >>> our "diskless" workstations. In sending 4Kbyte filesystem blocks among >>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>> BACK-TO-BACK Ethernet packets. >>> >>> Someone, I think it was Van Jacobson, did some early work on maximizing >>> Ethernet thruput, and reported it at USENIX conferences. His >>> observation was that to get maximal thruput, you needed 3 things to be >>> happening absolutely simultaneously: the sender processing & queueing >>> the next packet; the Ethernet wire moving the current packet; the >>> receiver dequeueing and processing the previous packet. 
If any of these >>> operations took longer than the others, then that would be the limiting >>> factor in the thruput. This applies to half duplex operation (only one >>> side transmits at a time); the end node processing requirement doubles >>> if you run full duplex data in both directions (on more modern Ethernets >>> that support that) . >>> >>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>> favorite things to work on was network performance. Here's one of his >>> signature blocks from 1996: >>> >>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>> 199 usec remote TCP latency over 100Mb/s //// >>> ethernet. Beat that! //// >>> -----------------------------------------////__________ o >>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>> >>> My guess is that the ISA cards of the day had never even *seen* back to >>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>> them), so of course they weren't tested to be able to handle them. The >>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>> most cards just had one or two packet buffers. And if the CPU didn't >>> immediately grab one of those received buffers, then the next packet >>> would get dropped for lack of a buffer to put it in. In sending, you >>> had to have the second buffer queued long before the inter-packet gap, or >>> you wouldn't send with minimum packet spacing on the wire. Most PC >>> operating systems couldn't do that. And if your card was slower than >>> the standard 9.6usec inter-packet gap after sensing carrier, >>> then any Sun waiting to transmit would beat your card to the wire, >>> deferring your card's transmission. 
>>> >>> You may have also been seeing the "Channel capture effect"; see: >>> >>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>> >>> John >>> > From gnu at toad.com Sun Mar 31 02:53:24 2019 From: gnu at toad.com (John Gilmore) Date: Sun, 31 Mar 2019 02:53:24 -0700 Subject: [ih] Internet History - broadcast addresses In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> Message-ID: <4707.1554026004@hop.toad.com> Karl Auerbach wrote: > Wasn't there also some disagreement over whether the IPv4 broadcast > address was 0.0.0.0 (BSD) or 255.255.255.255 (everybody else)? Well, no; it was worse than that! IP is still limping with the scars of that mess. 4.2BSD defaulted to using the all-zeros node address of EACH SUBNET as the broadcast address. The IP RFCs (later) ultimately decided to use the all-ones node address of EACH SUBNET. (In both cases, read "network" for subnet, since subnetting hadn't been invented yet.) You can see this in the first "standard" RFC for IP over Ethernet, RFC 894 by Charles Hornig of Symbolics: https://www.rfc-editor.org/rfc/rfc894.txt The broadcast Internet address (the address on that network with a host part of all binary ones) should be mapped to the broadcast Ethernet address (of all binary ones, FF-FF-FF-FF-FF-FF hex). ... Unix 4.2bsd also uses a non-standard Internet broadcast address with a host part of all zeroes, this may also be changed in the future.) I dug around in the great old tcp-ip mailing list's archives, and found a few messages about this: http://securitydigest.org/tcp-ip/archive/1984/05 Date: Mon, 7 May 84 10:35 EDT From: Charles Hornig ...
[RFC894] Broadcast Address The broadcast Internet address (the address on that network with a host part of all binary ones) should be mapped to the broadcast Ethernet address (of all binary ones, FF-FF-FF-FF-FF-FF hex). This is the first I've heard of this! What's wrong with the 4.2 convention? [I've always made the (mostly correct) assumption that \no host on a network would be assigned address zero/. This has the useful property that no additional context is necessary to differentiate a "network address" from "a particular host" on that network. It's not hard to intuit that (for a broadcast network) sending \to the network/ is meaningful.] There's also a practical justification--it's easier to check for a zero address than all ones (which is different for Class A, B, C networks). Sure you don't want to reconsider? I [Charles] picked all ones because that was the way other documents referred to it (I think IEN 212 is the right place). I also think that all ones is better than all zeroes. The zero value is likely to appear as a network name and reusing it as a broadcast address might lead to confusion between these ideas. IEN 212 did indeed recommend reserving all-ones for broadcast, probably as a mirror of the Ethernet FF:FF:FF:FF:FF:FF broadcast address: https://www.rfc-editor.org/ien/ien212.txt We propose to define the IP broadcast address to be the IP address in each class with all its local host part bits on. (E.G., A.255.255.255 for class A, A.B.255.255 for class B, and A.B.C.255 for class C.) In each case, the address would map to the local network broadcast address if broadcast addressing was supported. On messages coming from other networks, the mapping would be done in the gateway. If a network did not support broadcast addressing, an ICMP destination unreachable message would be returned. But that IEN foolishly claimed that: The only "cost" of this mechanism is that it reserves one IP address from each class. 
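The classful mapping IEN 212 proposed is mechanical enough to state in a few lines of code. A sketch in Python (the helper name is mine, not from any of these documents):

```python
def classful_broadcast(ip: str) -> str:
    # Map an IPv4 address to the IEN 212 classful broadcast address:
    # class A -> A.255.255.255, class B -> A.B.255.255, class C -> A.B.C.255.
    octets = ip.split(".")
    first = int(octets[0])
    if first < 128:        # class A: leading bit 0
        net_len = 1
    elif first < 192:      # class B: leading bits 10
        net_len = 2
    elif first < 224:      # class C: leading bits 110
        net_len = 3
    else:
        raise ValueError("class D/E addresses have no classful broadcast")
    return ".".join(octets[:net_len] + ["255"] * (4 - net_len))

assert classful_broadcast("10.1.2.3") == "10.255.255.255"     # class A
assert classful_broadcast("128.6.4.1") == "128.6.255.255"     # class B
assert classful_broadcast("192.0.2.7") == "192.0.2.255"       # class C
```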
There were only three classes (A, B, and C), so it claimed to only reserve 3 addresses. But actually, it reserved one IP address from each NETWORK NUMBER, of which there were tens of thousands. And worse, it reserved those IP addresses even in networks that do not use a LAN and do not support broadcast addresses (by claiming that "Destination Unreachable" would be the result of attempted communication). Here is an excerpt from a 1986 discussion from TCP-IP that talks about how to binary-patch subnets into your Suns that use 4.2BSD networking stacks so they won't make broadcast storms based on different subnet broadcast addresses: http://securitydigest.org/tcp-ip/archive/1986/08 Date: Mon, 11-Aug-86 09:08:38 EDT From: HEDRICK at RED.RUTGERS.EDU (Charles Hedrick) To: mod.protocols.tcp-ip Subject: subnetting on Suns ... You need to decide what you want your broadcast address to be. 4.2 and the Sun kernel use the convention that the broadcast address uses a host number of 0. The newest convention, implemented in 4.3, uses a host number of -1, i.e. 255. The standards were only changed recently, so most vendors have not caught up. Consider our network, which is a class B network, 128.6. On subnet 4, there are the following possible broadcast addresses:
128.6.0.0 - used by 4.2
128.6.4.0 - used by 4.2 if you install subnets
128.6.255.255 - used by 4.3, I think
128.6.4.255 - used by 4.3 with subnets turned on, I think
255.255.255.255 - not used by any, but recognized by Sun and 4.3
When you enable subnets, the subnet number becomes in effect part of your network number. Thus the network number is 128.6.x, not just 128.6. So the broadcast address ends up having the subnet number as part of it. Now, the problem is that the Suns are set up such that every machine on the network, including non-Suns, must agree on the broadcast address. Otherwise there will be chaos and your network will be flooded with spurious packets, causing all of your machines to become unusable. ...
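Hedrick's five candidate broadcast addresses for subnet 4 of 128.6 can be reproduced with Python's standard ipaddress module. The 4.2BSD all-zeros forms fall out as the network addresses, since modern tooling only knows the all-ones convention; the labels are mine:

```python
import ipaddress

# Class B network 128.6.0.0/16, with subnet 4 carved out as 128.6.4.0/24,
# per Hedrick's example.
net = ipaddress.ip_network("128.6.0.0/16")
subnet = ipaddress.ip_network("128.6.4.0/24")

candidates = {
    "4.2BSD, no subnets":  str(net.network_address),       # host part all zeros
    "4.2BSD with subnets": str(subnet.network_address),
    "4.3BSD, no subnets":  str(net.broadcast_address),     # host part all ones
    "4.3BSD with subnets": str(subnet.broadcast_address),
    "limited broadcast":   "255.255.255.255",
}
for convention, addr in candidates.items():
    print(f"{convention}: {addr}")
```

This prints 128.6.0.0, 128.6.4.0, 128.6.255.255, 128.6.4.255, and 255.255.255.255, matching Hedrick's list.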
This kind of packet storm problem led to banning the use of 0 as a unicast host address in 1989 in Host Requirements, RFC 1122. https://www.rfc-editor.org/rfc/rfc1122.txt DISCUSSION: Silent discard of erroneous datagrams is generally intended to prevent "broadcast storms". There's a whole section 3.3.6 in there that talks about broadcasting, and other sections that require that the network layer pass "up" to the IP layer an indication of whether the packet was broadcast or not, so that network-layer broadcasts with unrecognized IP addresses can be discarded rather than responded to. It also recommends sending LAN broadcasts to 255.255.255.255 rather than the actual reserved subnet broadcast address, to improve interoperability. So, as of today, TWO addresses per subnet are still reserved: 0 and all-ones. And with CIDR, we have millions of subnets, not just tens of thousands, so we're wasting millions of addresses. I suggest that, now that 4.2BSD is dead and gone, the protocols should be revised to allow people to use the 0 address in each subnet for a real unicast node, instead of reserving it. And on non-LANs, we should allow the all-ones address for a real node as well. But that's about the future, not about internet-history. John From richard at bennett.com Sun Mar 31 14:10:31 2019 From: richard at bennett.com (Richard Bennett) Date: Sun, 31 Mar 2019 15:10:31 -0600 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> Message-ID: <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> Ethernet was total crap before 10BASE-T for many reasons.
The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn't handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and the first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to a 3C501 so as to avoid making it choke. AMD's first Ethernet chip seeded its random number generator (needed for the random binary exponential backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. All in all, coax Ethernet was a horrible design in practice. The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn't quite there yet. The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at something like 98% of wire speed. The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which allowed expert witnesses to collect some nice fees until they expired.
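The power-up seeding failure Bennett describes can be seen in a toy model of truncated binary exponential backoff (a sketch, not chip-accurate; the function names are mine): two stations whose generators start from the same seed pick identical slot numbers on every retry, so their retransmissions collide forever.

```python
import random

def backoff_slot(rng, attempt):
    # Truncated binary exponential backoff: after the n-th collision,
    # pick a slot uniformly from [0, 2**min(n, 10) - 1].
    return rng.randrange(2 ** min(attempt, 10))

def slot_sequence(seed, attempts=16):
    # Slots one station would choose on successive retries, given the
    # seed its pseudo-random generator started from at power up.
    rng = random.Random(seed)
    return [backoff_slot(rng, n) for n in range(1, attempts + 1)]

# Same seed (as after a short power failure hit every station at once):
# the stations stay in lockstep, so every retransmission collides again.
assert slot_sequence(42) == slot_sequence(42)

# Distinct seeds: the sequences diverge, and the first attempt on which
# the stations pick different slots resolves the collision.
print(slot_sequence(1))
print(slot_sequence(2))
```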
10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. RB > On Mar 30, 2019, at 2:33 PM, Jack Haverty wrote: > [Jack Haverty's message of 30 March and the thread it quotes, reproduced in full above, trimmed] -- Richard Bennett High Tech Forum Founder Ethernet & Wi-Fi standards co-creator Internet Policy Consultant -------------- next part -------------- An HTML attachment was scrubbed... URL: From sob at sobco.com Sun Mar 31 15:23:10 2019 From: sob at sobco.com (Scott O. Bradner) Date: Sun, 31 Mar 2019 18:23:10 -0400 Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> Message-ID: <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> It might have been crap, but there sure was a lot of it, and it worked well enough to dominate the LAN space over token ring. 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely, but a lot of the original yellow cable was deployed. Scott > On Mar 31, 2019, at 5:10 PM, Richard Bennett wrote: > > Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn't handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to 3C501 so as to avoid making it choke. > > AMD's first Ethernet chip seeded its random number generator (need for the random exponential binary backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. > > Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. > > All in all, coax Ethernet was a horrible design in practice. > > The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters.
They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn?t quite there yet. > > The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at some like 98% of wire speed. > > The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which expert witnesses to collect some nice fees until they expired. > > 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. > > RB > >> On Mar 30, 2019, at 2:33 PM, Jack Haverty wrote: >> >> The Unix-box problem was the only one I recall. However, we did move >> the testing world onto a separate LAN so anything bad that some random >> box did wouldn't affect everyone. So it may have happened but we didn't >> care. Our mission was to get the software working.... >> >> /Jack >> >> On 3/30/19 12:41 AM, Olivier MJ Cr?pin-Leblond wrote: >>> Dear Jack, >>> >>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 >>> compatible cards on the same network. >>> Having started with Novell & 3COM cards, all on Coax, we found that we >>> started getting timeouts when we added more cheap NE2000 compatible cards. 
>>> Did the same thing with oscilloscopes/analysers and tweaked parameters >>> to go around this problem. >>> Warm regards, >>> >>> Olivier >>> >>> On 30/03/2019 02:57, Jack Haverty wrote: >>>> I can confirm that there was at least one Unix vendor that violated the >>>> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >>>> at least one of every common computer so that we could test software. >>>> >>>> While testing, we noticed that when one particular type of machine was >>>> active doing a long bulk transfer, all of the other traffic on our LAN >>>> slowed to a crawl. I was a hardware guy in a software universe, but I >>>> managed to find one other hardware type, and we scrounged up an >>>> oscilloscope, and then looked closely at the wire and at the spec. >>>> >>>> I don't remember the details, but there was some timer that was supposed >>>> to have a certain minimum value and that Unix box was consistently >>>> violating it. So it could effectively seize the LAN for as long as it >>>> had traffic. >>>> >>>> Sorry, I can't remember which vendor it was. It might have been Sun, or >>>> maybe one specific model/vintage, since we had a lot of Sun equipment >>>> but hadn't noticed the problem before. >>>> >>>> I suspect there's a lot of such "standards" that are routinely violated >>>> in the network. Putting it on paper and declaring it mandatory doesn't >>>> make it true. Personally I never saw much rigorous certification >>>> testing or enforcement (not just of Ethernet), and the general >>>> "robustness" designs can hide bad behavior. >>>> >>>> /Jack Haverty >>>> >>>> >>>> On 3/29/19 5:40 PM, John Gilmore wrote: >>>>> Karl Auerbach wrote: >>>>>> I recently had someone confirm a widely held belief that Sun >>>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>>>>> to have a winning bias against Ethernet machines that adhered to the >>>>>> IEEE/DIX ethernet timer values. 
Those of us who tended to work with >>>>>> networked PC platforms were well aware of the effect of putting a Sun >>>>>> onto the same Ethernet: what had worked before stopped working, but >>>>>> the Suns all chatted among themselves quite happily. >>>>> Are we talking about 10 Mbit Ethernet, or something later? >>>>> >>>>> I worked at Sun back then. Sun was shipping products with Ethernet >>>>> before the IBM PC even existed. Sun products used standard Ethernet >>>>> chips. Some of those chips were super customizable via internal >>>>> registers (I have a T1 card that uses an Ethernet chip with settings >>>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to >>>>> meet the standard specs. What evidence is there of any non-standard >>>>> settings? >>>>> >>>>> What Sun did differently was that we tuned the implementation so it >>>>> could actually send and receive back-to-back packets, at the minimum >>>>> specified inter-packet gaps. By building both the hardware and the >>>>> software ourselves (like Apple today, and unlike Microsoft), we were >>>>> able to work out all the kinks to maximize performance. We could >>>>> improve everything: software drivers, interrupt latencies, TCP/IP >>>>> stacks, DMA bus arbitration overhead. Sun was the first to do >>>>> production shared disk-drive access over Ethernet, to reduce the cost of >>>>> our "diskless" workstations. In sending 4Kbyte filesystem blocks among >>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>>>> BACK-TO-BACK Ethernet packets. >>>>> >>>>> Someone, I think it was Van Jacobson, did some early work on maximizing >>>>> Ethernet thruput, and reported it at USENIX conferences. His >>>>> observation was that to get maximal thruput, you needed 3 things to be >>>>> happening absolutely simultaneously: the sender processing & queueing >>>>> the next packet; the Ethernet wire moving the current packet; the >>>>> receiver dequeueing and processing the previous packet. 
If any of these >>>>> operations took longer than the others, then that would be the limiting >>>>> factor in the thruput. This applies to half duplex operation (only one >>>>> side transmits at a time); the end node processing requirement doubles >>>>> if you run full duplex data in both directions (on more modern Ethernets >>>>> that support that). >>>>> >>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>>>> favorite things to work on was network performance. Here's one of his >>>>> signature blocks from 1996: >>>>> >>>>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>>>> 199 usec remote TCP latency over 100Mb/s //// >>>>> ethernet. Beat that! //// >>>>> -----------------------------------------////__________ o >>>>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>>>> >>>>> My guess is that the ISA cards of the day had never even *seen* back to >>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>>>> them), so of course they weren't tested to be able to handle them. The >>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>>>> most cards just had one or two packet buffers. And if the CPU didn't >>>>> immediately grab one of those received buffers, then the next packet >>>>> would get dropped for lack of a buffer to put it in. In sending, you >>>>> had to have the second buffer queued long before the inter-packet gap, or >>>>> you wouldn't send with minimum packet spacing on the wire. Most PC >>>>> operating systems couldn't do that. And if your card was slower than >>>>> the standard 9.6 usec inter-packet gap after sensing carrier, >>>>> then any Sun waiting to transmit would beat your card to the wire, >>>>> deferring your card's transmission.
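The three-stage pipelining observation above can be sketched numerically. This is a hypothetical back-of-the-envelope model (not from the original mail): with sender processing, wire transmission, and receiver processing fully overlapped, steady-state per-packet time is the maximum of the three stage times, not their sum. The function names and parameter values are illustrative.

```python
# Hedged sketch of the three-stage pipeline bottleneck described above:
# sender processing, wire transmission, and receiver processing overlap,
# so per-packet time at steady state is the MAX of the three, not the sum.

def wire_time_us(frame_bytes, mbps=10.0, ifg_us=9.6):
    """Time one frame occupies a half-duplex Ethernet, in microseconds,
    including the 8 preamble/SFD octets and the inter-frame gap."""
    preamble_bytes = 8
    bits = (frame_bytes + preamble_bytes) * 8
    return bits / mbps + ifg_us  # bits / (Mbit/s) gives microseconds

def pipeline_throughput_mbps(frame_bytes, send_us, recv_us, mbps=10.0):
    """Steady-state goodput (Mbit/s of frame payload) when the three
    stages run concurrently; the slowest stage sets the packet rate."""
    per_packet_us = max(send_us, wire_time_us(frame_bytes, mbps), recv_us)
    return frame_bytes * 8 / per_packet_us

# A 1500-byte frame occupies the 10 Mb/s wire for 1216.0 us (incl. IFG).
# If each end needs 1000 us per packet, the wire is the bottleneck;
# if the receiver needs 2000 us per packet, throughput drops to 6 Mb/s.
```

Under this model, a PC card that misses even one receive window (because it has only one buffer) effectively inflates its `recv_us` by a full retransmission timeout, which is why the observed degradation was so much worse than a simple speed mismatch would suggest.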
>>>>> >>>>> You may have also been seeing the "Channel capture effect"; see: >>>>> >>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>>>> >>>>> John >>>>> >>> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. > > -- > Richard Bennett > High Tech Forum Founder > Ethernet & Wi-Fi standards co-creator > > Internet Policy Consultant > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. From richard at bennett.com Sun Mar 31 15:43:37 2019 From: richard at bennett.com (Richard Bennett) Date: Sun, 31 Mar 2019 16:43:37 -0600 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: Ethernet is a catchy name, I'll give you that. 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than a thousand; that was good because there was no way to upgrade them to higher speeds. > On Mar 31, 2019, at 4:23 PM, Scott O.
Bradner wrote: > > might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over > token ring > > 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but > a lot of the original yellow cable was deployed > > Scott > >> On Mar 31, 2019, at 5:10 PM, Richard Bennett wrote: >> >> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn't handle incoming packets with less than an 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and the first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to a 3C501 so as to avoid making it choke. >> >> AMD's first Ethernet chip seeded its random number generator (needed for the random binary exponential backoff algorithm) at power up, so a short power failure caused stations to synchronize, making collision resolution impossible. >> >> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation delay. >> >> All in all, coax Ethernet was a horrible design in practice. >> >> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn't quite there yet.
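The RNG-seeding bug described above is easy to demonstrate in miniature. This is a hypothetical simulation, not AMD's actual chip logic; the function name and slot model are illustrative, using the standard truncated binary exponential backoff rule (after the n-th collision, wait a random number of slot times in [0, 2^min(n,10) - 1]).

```python
import random

def backoff_slots(seed, attempts=10):
    """Slot choices of one station running truncated binary exponential
    backoff: after the n-th collision, pick uniformly from
    [0, 2**min(n, 10) - 1] slot times."""
    rng = random.Random(seed)
    return [rng.randrange(2 ** min(n, 10)) for n in range(1, attempts + 1)]

# Identically seeded stations (the power-fail scenario): both pick the
# same slot in every round, so every retransmission collides again and
# collision resolution never terminates.
same = sum(a == b for a, b in zip(backoff_slots(42), backoff_slots(42)))

# Independently seeded stations typically diverge within a round or two,
# since the window doubles after each collision.
diff = sum(a == b for a, b in zip(backoff_slots(1), backoff_slots(2)))
```

With shared seeds, `same` equals the full number of attempts; with distinct seeds the probability that two stations still agree after round n falls off as the windows double, which is the whole point of randomizing the backoff per station.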
>> >> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at something like 98% of wire speed. >> >> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which allowed expert witnesses to collect some nice fees until they expired. >> >> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. >> >> RB -- Richard Bennett High Tech Forum Founder Ethernet & Wi-Fi standards co-creator Internet Policy Consultant From sob at sobco.com Sun Mar 31 15:56:15 2019 From: sob at sobco.com (Scott O. Bradner) Date: Sun, 31 Mar 2019 18:56:15 -0400 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide Scott > On Mar 31, 2019, at 6:43 PM, Richard Bennett wrote: > > Ethernet is a catchy name, I'll give you that.
> > 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than a thousand; that was good because there was no way to upgrade them to higher speeds. From richard at bennett.com Sun Mar 31 16:13:02 2019 From: richard at bennett.com (Richard Bennett) Date: Sun, 31 Mar 2019 17:13:02 -0600 Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: At its peak, 3Com manufactured millions of NICs per week, 99.99% for twisted pair. RB > On Mar 31, 2019, at 4:56 PM, Scott O. Bradner wrote: > > hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide > > Scott > >> On Mar 31, 2019, at 6:43 PM, Richard Bennett wrote: >> >> Ethernet is a catchy name, I?ll give you that. >> >> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than thousand; that was good because there was no way to upgrade them to higher speeds. >> >>> On Mar 31, 2019, at 4:23 PM, Scott O. Bradner wrote: >>> >>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over >>> token ring >>> >>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but >>> a lot of the original yellow cable was deployed >>> >>> Scott >>> >>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett wrote: >>>> >>>> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn?t handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. 
The 3Com server NIC, the 3C-505, had an embedded CPU and first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to 3C501 so as to avoid making it choke. >>>> >>>> AMD?s first Ethernet chip seeded its random number generator (need for the random exponential binary backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. >>>> >>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. >>>> >>>> All in all, coax Ethernet was a horrible design in practice. >>>> >>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn?t quite there yet. >>>> >>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at some like 98% of wire speed. >>>> >>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which expert witnesses to collect some nice fees until they expired. >>>> >>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. 
Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. >>>> >>>> RB >>>> >>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty wrote: >>>>> >>>>> The Unix-box problem was the only one I recall. However, we did move >>>>> the testing world onto a separate LAN so anything bad that some random >>>>> box did wouldn't affect everyone. So it may have happened but we didn't >>>>> care. Our mission was to get the software working.... >>>>> >>>>> /Jack >>>>> >>>>> On 3/30/19 12:41 AM, Olivier MJ Cr?pin-Leblond wrote: >>>>>> Dear Jack, >>>>>> >>>>>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 >>>>>> compatible cards on the same network. >>>>>> Having started with Novell & 3COM cards, all on Coax, we found that we >>>>>> started getting timeouts when we added more cheap NE2000 compatible cards. >>>>>> Did the same thing with oscilloscopes/analysers and tweaked parameters >>>>>> to go around this problem. >>>>>> Warm regards, >>>>>> >>>>>> Olivier >>>>>> >>>>>> On 30/03/2019 02:57, Jack Haverty wrote: >>>>>>> I can confirm that there was at least one Unix vendor that violated the >>>>>>> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >>>>>>> at least one of every common computer so that we could test software. >>>>>>> >>>>>>> While testing, we noticed that when one particular type of machine was >>>>>>> active doing a long bulk transfer, all of the other traffic on our LAN >>>>>>> slowed to a crawl. I was a hardware guy in a software universe, but I >>>>>>> managed to find one other hardware type, and we scrounged up an >>>>>>> oscilloscope, and then looked closely at the wire and at the spec. >>>>>>> >>>>>>> I don't remember the details, but there was some timer that was supposed >>>>>>> to have a certain minimum value and that Unix box was consistently >>>>>>> violating it. 
So it could effectively seize the LAN for as long as it >>>>>>> had traffic. >>>>>>> >>>>>>> Sorry, I can't remember which vendor it was. It might have been Sun, or >>>>>>> maybe one specific model/vintage, since we had a lot of Sun equipment >>>>>>> but hadn't noticed the problem before. >>>>>>> >>>>>>> I suspect there's a lot of such "standards" that are routinely violated >>>>>>> in the network. Putting it on paper and declaring it mandatory doesn't >>>>>>> make it true. Personally I never saw much rigorous certification >>>>>>> testing or enforcement (not just of Ethernet), and the general >>>>>>> "robustness" designs can hide bad behavior. >>>>>>> >>>>>>> /Jack Haverty >>>>>>> >>>>>>> >>>>>>> On 3/29/19 5:40 PM, John Gilmore wrote: >>>>>>>> Karl Auerbach wrote: >>>>>>>>> I recently had someone confirm a widely held belief that Sun >>>>>>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>>>>>>>> to have a winning bias against Ethernet machines that adhered to the >>>>>>>>> IEEE/DIX ethernet timer values. Those of us who tended to work with >>>>>>>>> networked PC platforms were well aware of the effect of putting a Sun >>>>>>>>> onto the same Ethernet: what had worked before stopped working, but >>>>>>>>> the Suns all chatted among themselves quite happily. >>>>>>>> Are we talking about 10 Mbit Ethernet, or something later? >>>>>>>> >>>>>>>> I worked at Sun back then. Sun was shipping products with Ethernet >>>>>>>> before the IBM PC even existed. Sun products used standard Ethernet >>>>>>>> chips. Some of those chips were super customizable via internal >>>>>>>> registers (I have a T1 card that uses an Ethernet chip with settings >>>>>>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to >>>>>>>> meet the standard specs. What evidence is there of any non-standard >>>>>>>> settings? 
>>>>>>>> >>>>>>>> What Sun did differently was that we tuned the implementation so it >>>>>>>> could actually send and receive back-to-back packets, at the minimum >>>>>>>> specified inter-packet gaps. By building both the hardware and the >>>>>>>> software ourselves (like Apple today, and unlike Microsoft), we were >>>>>>>> able to work out all the kinks to maximize performance. We could >>>>>>>> improve everything: software drivers, interrupt latencies, TCP/IP >>>>>>>> stacks, DMA bus arbitration overhead. Sun was the first to do >>>>>>>> production shared disk-drive access over Ethernet, to reduce the cost of >>>>>>>> our "diskless" workstations. In sending 4Kbyte filesystem blocks among >>>>>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>>>>>>> BACK-TO-BACK Ethernet packets. >>>>>>>> >>>>>>>> Someone, I think it was Van Jacobson, did some early work on maximizing >>>>>>>> Ethernet thruput, and reported it at USENIX conferences. His >>>>>>>> observation was that to get maximal thruput, you needed 3 things to be >>>>>>>> happening absolutely simultaneously: the sender processing & queueing >>>>>>>> the next packet; the Ethernet wire moving the current packet; the >>>>>>>> receiver dequeueing and processing the previous packet. If any of these >>>>>>>> operations took longer than the others, then that would be the limiting >>>>>>>> factor in the thruput. This applies to half duplex operation (only one >>>>>>>> side transmits at a time); the end node processing requirement doubles >>>>>>>> if you run full duplex data in both directions (on more modern Ethernets >>>>>>>> that support that) . >>>>>>>> >>>>>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>>>>>>> favorite things to work on was network performance. Here's one of his >>>>>>>> signature blocks from 1996: >>>>>>>> >>>>>>>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>>>>>>> 199 usec remote TCP latency over 100Mb/s //// >>>>>>>> ethernet. 
Beat that! //// >>>>>>>> -----------------------------------------////__________ o >>>>>>>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>>>>>>> >>>>>>>> My guess is that the ISA cards of the day had never even *seen* back to >>>>>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>>>>>>> them), so of course they weren't tested to be able to handle them. The >>>>>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>>>>>>> most cards just had one or two packet buffers. And if the CPU didn't >>>>>>>> immediately grab one of those received buffers, then the next packet >>>>>>>> would get dropped for lack of a buffer to put it in. In sending, you >>>>>>>> had to have the second buffer queued long before the inter-packet gap, or >>>>>>>> you wouldn't send with minimum packet spacing on the wire. Most PC >>>>>>>> operating systems couldn't do that. And if your card was slower than >>>>>>>> the standard 9.6usec inter-packet gap after sensing carrier, >>>>>>>> then any Sun waiting to transmit would beat your card to the wire, >>>>>>>> deferring your card's transmission. >>>>>>>> >>>>>>>> You may have also been seeing the "Channel capture effect"; see: >>>>>>>> >>>>>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>>>>>>> >>>>>>>> John >>>>>>>> >>>>>> >>>>> _______ >>>>> internet-history mailing list >>>>> internet-history at postel.org >>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>> Contact list-owner at postel.org for assistance. >>>> >>>> ? >>>> Richard Bennett >>>> High Tech Forum Founder >>>> Ethernet & Wi-Fi standards co-creator >>>> >>>> Internet Policy Consultant >>>> >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. 
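The Van Jacobson observation John Gilmore recounts above reduces to a simple pipeline model: with sender processing, wire serialization, and receiver processing overlapped on consecutive packets, sustained throughput is set by the slowest of the three stages. A minimal sketch of that model; the stage times are invented for illustration, not measurements from this thread:

```python
# Pipeline model of LAN throughput, after the Van Jacobson observation
# recounted above: sender, wire, and receiver each work on a different
# packet at the same time, so the slowest of the three stages limits
# the achievable rate.  All stage times here are assumed values.

FRAME_BYTES = 1514                 # maximum 10 Mb/s Ethernet frame
WIRE_US = FRAME_BYTES * 8 / 10     # serialization time at 10 Mb/s, in usec

def throughput_mbps(sender_us, wire_us, receiver_us):
    """Steady-state throughput (Mb/s) with the three stages fully overlapped."""
    bottleneck_us = max(sender_us, wire_us, receiver_us)
    return FRAME_BYTES * 8 / bottleneck_us   # bits per usec == Mb/s

# A host needing 2000 usec of CPU per packet cannot keep the wire busy:
print(round(throughput_mbps(2000, WIRE_US, 300), 2))
# A stack that stays under the wire time on both ends runs at wire speed:
print(round(throughput_mbps(800, WIRE_US, 900), 2))
```

As the post notes, running full duplex doubles the per-end processing load; in this model that just changes which argument the max() picks out as the bottleneck.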
-- Richard Bennett High Tech Forum Founder Ethernet & Wi-Fi standards co-creator Internet Policy Consultant -------------- next part -------------- An HTML attachment was scrubbed... URL: From galmes at tamu.edu Sun Mar 31 16:29:33 2019 From: galmes at tamu.edu (Guy Almes) Date: Sun, 31 Mar 2019 19:29:33 -0400 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: <6e7425d0-d410-457e-6c3e-73b88472e368@tamu.edu> I wonder how much yellow coax is still in the ceilings of university buildings around the world, -- Guy On 3/31/19 18:56, Scott O. Bradner wrote: > hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide > > Scott > >> On Mar 31, 2019, at 6:43 PM, Richard Bennett wrote: >> >> Ethernet is a catchy name, I'll give you that. >> >> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than a thousand; that was good because there was no way to upgrade them to higher speeds. >> >>> On Mar 31, 2019, at 4:23 PM, Scott O.
Bradner wrote: >>> >>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over >>> token ring >>> >>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but >>> a lot of the original yellow cable was deployed >>> >>> Scott >>> >>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett wrote: >>>> >>>> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn't handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C505, had an embedded CPU and the first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to a 3C501 so as to avoid making it choke. >>>> >>>> AMD's first Ethernet chip seeded its random number generator (needed for the random binary exponential backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. >>>> >>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. >>>> >>>> All in all, coax Ethernet was a horrible design in practice. >>>> >>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn't quite there yet.
>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at something like 98% of wire speed. >>>> >>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which allowed expert witnesses to collect some nice fees until they expired. >>>> >>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. >>>> >>>> RB _______ internet-history mailing list internet-history at postel.org http://mailman.postel.org/mailman/listinfo/internet-history Contact list-owner at postel.org for assistance. From brian.e.carpenter at gmail.com Sun Mar 31 17:25:08 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 1 Apr 2019 13:25:08 +1300 Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> I don't know the numbers but we had quite a lot at CERN, and several technicians who became experts at adding a tap. I think Boeing had a lot too, and Microsoft. I do agree that CheaperNet made scaling up a lot easier and made everything more user-proof. Although we did once have a user (i.e. a physicist) who discovered ungrounded screens on a bunch of CheaperNet coax cables, and soldered them all to ground. Of course that created numerous ground loops and broke everything in his area, since the coax screen should only be grounded at one end. (There were some areas of CERN where you could measure ground currents of 30 or 40 amps AC, due to some very, very big electromagnets that inevitably unbalanced the 3-phase system.) That particular user later became head of CERN's IT Division. Regards Brian Carpenter On 01-Apr-19 11:56, Scott O. Bradner wrote: > hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide > > Scott _______ internet-history mailing list internet-history at postel.org http://mailman.postel.org/mailman/listinfo/internet-history Contact list-owner at postel.org for assistance. From sob at sobco.com Sun Mar 31 19:02:57 2019 From: sob at sobco.com (Scott O. Bradner) Date: Sun, 31 Mar 2019 22:02:57 -0400 Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: <684612DD-B93E-41E3-97F7-3204ADA491BC@sobco.com> sure - after twisted pair was standardized > On Mar 31, 2019, at 7:13 PM, Richard Bennett wrote: > > At its peak, 3Com manufactured millions of NICs per week, 99.99% for twisted pair. > > RB > >> On Mar 31, 2019, at 4:56 PM, Scott O. Bradner wrote: >> >> hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide >> >> Scott >> >>> On Mar 31, 2019, at 6:43 PM, Richard Bennett wrote: >>> >>> Ethernet is a catchy name, I?ll give you that. >>> >>> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than thousand; that was good because there was no way to upgrade them to higher speeds. >>> >>>> On Mar 31, 2019, at 4:23 PM, Scott O. Bradner wrote: >>>> >>>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over >>>> token ring >>>> >>>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but >>>> a lot of the original yellow cable was deployed >>>> >>>> Scott >>>> >>>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett wrote: >>>>> >>>>> Ethernet was total crap before 10BASE-T for many reasons. 
The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn?t handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to 3C501 so as to avoid making it choke. >>>>> >>>>> AMD?s first Ethernet chip seeded its random number generator (need for the random exponential binary backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. >>>>> >>>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. >>>>> >>>>> All in all, coax Ethernet was a horrible design in practice. >>>>> >>>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn?t quite there yet. >>>>> >>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at some like 98% of wire speed. >>>>> >>>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which expert witnesses to collect some nice fees until they expired. 
>>>>> >>>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. >>>>> >>>>> RB >>>>> >>>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty wrote: >>>>>> >>>>>> The Unix-box problem was the only one I recall. However, we did move >>>>>> the testing world onto a separate LAN so anything bad that some random >>>>>> box did wouldn't affect everyone. So it may have happened but we didn't >>>>>> care. Our mission was to get the software working.... >>>>>> >>>>>> /Jack >>>>>> >>>>>> On 3/30/19 12:41 AM, Olivier MJ Crépin-Leblond wrote: >>>>>>> Dear Jack, >>>>>>> >>>>>>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 >>>>>>> compatible cards on the same network. >>>>>>> Having started with Novell & 3COM cards, all on Coax, we found that we >>>>>>> started getting timeouts when we added more cheap NE2000 compatible cards. >>>>>>> Did the same thing with oscilloscopes/analysers and tweaked parameters >>>>>>> to go around this problem. >>>>>>> Warm regards, >>>>>>> >>>>>>> Olivier >>>>>>> >>>>>>> On 30/03/2019 02:57, Jack Haverty wrote: >>>>>>>> I can confirm that there was at least one Unix vendor that violated the >>>>>>>> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >>>>>>>> at least one of every common computer so that we could test software. >>>>>>>> >>>>>>>> While testing, we noticed that when one particular type of machine was >>>>>>>> active doing a long bulk transfer, all of the other traffic on our LAN >>>>>>>> slowed to a crawl. 
I was a hardware guy in a software universe, but I >>>>>>>> managed to find one other hardware type, and we scrounged up an >>>>>>>> oscilloscope, and then looked closely at the wire and at the spec. >>>>>>>> >>>>>>>> I don't remember the details, but there was some timer that was supposed >>>>>>>> to have a certain minimum value and that Unix box was consistently >>>>>>>> violating it. So it could effectively seize the LAN for as long as it >>>>>>>> had traffic. >>>>>>>> >>>>>>>> Sorry, I can't remember which vendor it was. It might have been Sun, or >>>>>>>> maybe one specific model/vintage, since we had a lot of Sun equipment >>>>>>>> but hadn't noticed the problem before. >>>>>>>> >>>>>>>> I suspect there's a lot of such "standards" that are routinely violated >>>>>>>> in the network. Putting it on paper and declaring it mandatory doesn't >>>>>>>> make it true. Personally I never saw much rigorous certification >>>>>>>> testing or enforcement (not just of Ethernet), and the general >>>>>>>> "robustness" designs can hide bad behavior. >>>>>>>> >>>>>>>> /Jack Haverty >>>>>>>> >>>>>>>> >>>>>>>> On 3/29/19 5:40 PM, John Gilmore wrote: >>>>>>>>> Karl Auerbach wrote: >>>>>>>>>> I recently had someone confirm a widely held belief that Sun >>>>>>>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>>>>>>>>> to have a winning bias against Ethernet machines that adhered to the >>>>>>>>>> IEEE/DIX ethernet timer values. Those of us who tended to work with >>>>>>>>>> networked PC platforms were well aware of the effect of putting a Sun >>>>>>>>>> onto the same Ethernet: what had worked before stopped working, but >>>>>>>>>> the Suns all chatted among themselves quite happily. >>>>>>>>> Are we talking about 10 Mbit Ethernet, or something later? >>>>>>>>> >>>>>>>>> I worked at Sun back then. Sun was shipping products with Ethernet >>>>>>>>> before the IBM PC even existed. Sun products used standard Ethernet >>>>>>>>> chips. 
Some of those chips were super customizable via internal >>>>>>>>> registers (I have a T1 card that uses an Ethernet chip with settings >>>>>>>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to >>>>>>>>> meet the standard specs. What evidence is there of any non-standard >>>>>>>>> settings? >>>>>>>>> >>>>>>>>> What Sun did differently was that we tuned the implementation so it >>>>>>>>> could actually send and receive back-to-back packets, at the minimum >>>>>>>>> specified inter-packet gaps. By building both the hardware and the >>>>>>>>> software ourselves (like Apple today, and unlike Microsoft), we were >>>>>>>>> able to work out all the kinks to maximize performance. We could >>>>>>>>> improve everything: software drivers, interrupt latencies, TCP/IP >>>>>>>>> stacks, DMA bus arbitration overhead. Sun was the first to do >>>>>>>>> production shared disk-drive access over Ethernet, to reduce the cost of >>>>>>>>> our "diskless" workstations. In sending 4Kbyte filesystem blocks among >>>>>>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>>>>>>>> BACK-TO-BACK Ethernet packets. >>>>>>>>> >>>>>>>>> Someone, I think it was Van Jacobson, did some early work on maximizing >>>>>>>>> Ethernet thruput, and reported it at USENIX conferences. His >>>>>>>>> observation was that to get maximal thruput, you needed 3 things to be >>>>>>>>> happening absolutely simultaneously: the sender processing & queueing >>>>>>>>> the next packet; the Ethernet wire moving the current packet; the >>>>>>>>> receiver dequeueing and processing the previous packet. If any of these >>>>>>>>> operations took longer than the others, then that would be the limiting >>>>>>>>> factor in the thruput. This applies to half duplex operation (only one >>>>>>>>> side transmits at a time); the end node processing requirement doubles >>>>>>>>> if you run full duplex data in both directions (on more modern Ethernets >>>>>>>>> that support that) . 
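The pipelining observation attributed to Van Jacobson above reduces to a one-line formula: when the sender, the wire, and the receiver each work on consecutive packets simultaneously, steady-state throughput is the packet size divided by the time of the slowest stage. A small sketch, using illustrative host timings that are not from the post:

```python
def pipelined_throughput_mbps(bytes_per_pkt, sender_us, wire_us, receiver_us):
    """With three stages overlapped on consecutive packets, the slowest
    stage sets the steady-state rate."""
    bottleneck_us = max(sender_us, wire_us, receiver_us)
    return bytes_per_pkt * 8 / bottleneck_us   # bits per microsecond == Mbit/s

# A 1500-byte frame occupies a 10 Mb/s wire for 1200 us, plus the 9.6 us gap.
wire = 1500 * 8 / 10 + 9.6                     # 1209.6 us
print(pipelined_throughput_mbps(1500, 800, wire, 900))    # wire-limited: ~9.92 Mb/s
print(pipelined_throughput_mbps(1500, 2000, wire, 900))   # sender-limited: 6.0 Mb/s
```

This is why, as the post says, any stage that runs longer than the others becomes the throughput limit: speeding up the non-bottleneck stages changes nothing.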
>>>>>>>>> >>>>>>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>>>>>>>> favorite things to work on was network performance. Here's one of his >>>>>>>>> signature blocks from 1996: >>>>>>>>> >>>>>>>>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>>>>>>>> 199 usec remote TCP latency over 100Mb/s //// >>>>>>>>> ethernet. Beat that! //// >>>>>>>>> -----------------------------------------////__________ o >>>>>>>>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>>>>>>>> >>>>>>>>> My guess is that the ISA cards of the day had never even *seen* back to >>>>>>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>>>>>>>> them), so of course they weren't tested to be able to handle them. The >>>>>>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>>>>>>>> most cards just had one or two packet buffers. And if the CPU didn't >>>>>>>>> immediately grab one of those received buffers, then the next packet >>>>>>>>> would get dropped for lack of a buffer to put it in. In sending, you >>>>>>>>> had to have the second buffer queued long before the inter-packet gap, or >>>>>>>>> you wouldn't send with minimum packet spacing on the wire. Most PC >>>>>>>>> operating systems couldn't do that. And if your card was slower than >>>>>>>>> the standard 9.6usec inter-packet gap after sensing carrier, >>>>>>>>> then any Sun waiting to transmit would beat your card to the wire, >>>>>>>>> deferring your card's transmission. >>>>>>>>> >>>>>>>>> You may have also been seeing the "Channel capture effect"; see: >>>>>>>>> >>>>>>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>>>>>>>> >>>>>>>>> John >>>>>>>>> >>>>>>> >>>>>> _______ >>>>>> internet-history mailing list >>>>>> internet-history at postel.org >>>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>>> Contact list-owner at postel.org for assistance. 
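The buffer-starvation failure John describes (a card with one or two packet buffers, back-to-back arrivals, a host too slow to drain them) can be mimicked with a toy arrival/drain model. The timings are illustrative, not measurements: minimum-size 10 Mb/s frames arrive every 60.8 µs (51.2 µs on the wire plus the 9.6 µs gap) against a hypothetical host that needs 150 µs to drain each buffer.

```python
def frames_received(n_frames, buffers, gap_us, service_us):
    """Toy model of a NIC with a fixed number of packet buffers.
    Frames arrive every gap_us; the host drains buffers serially,
    taking service_us each. A frame arriving with no free buffer
    is dropped, exactly as on the one-buffer ISA cards."""
    free = buffers
    drain_done = []                      # completion times of busy buffers
    received = 0
    t = 0.0
    for _ in range(n_frames):
        still_busy = [d for d in drain_done if d > t]
        free += len(drain_done) - len(still_busy)   # reclaim drained buffers
        drain_done = still_busy
        if free > 0:
            free -= 1
            received += 1
            start = max([t] + drain_done)           # host drains one at a time
            drain_done.append(start + service_us)
        t += gap_us
    return received

# Back-to-back minimum frames vs. a slow host: one buffer drops heavily,
# a second buffer absorbs a little more of the burst.
print(frames_received(10, buffers=1, gap_us=60.8, service_us=150))
print(frames_received(10, buffers=2, gap_us=60.8, service_us=150))
```

Run with a fast host (say `service_us=10`), the same model receives every frame, matching the point that Suns with tuned drivers had no trouble while cheap cards dropped the second packet of every pair.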
>>>>> >>>>> -- >>>>> Richard Bennett >>>>> High Tech Forum Founder >>>>> Ethernet & Wi-Fi standards co-creator >>>>> >>>>> Internet Policy Consultant >>>>> >>>>> _______ >>>>> internet-history mailing list >>>>> internet-history at postel.org >>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>> Contact list-owner at postel.org for assistance. >>>> >>>> >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>> >>> -- >>> Richard Bennett >>> High Tech Forum Founder >>> Ethernet & Wi-Fi standards co-creator >>> >>> Internet Policy Consultant >>> >> > > -- > Richard Bennett > High Tech Forum Founder > Ethernet & Wi-Fi standards co-creator > > Internet Policy Consultant > From richard at bennett.com Sun Mar 31 19:31:09 2019 From: richard at bennett.com (Richard Bennett) Date: Sun, 31 Mar 2019 20:31:09 -0600 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <684612DD-B93E-41E3-97F7-3204ADA491BC@sobco.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <684612DD-B93E-41E3-97F7-3204ADA491BC@sobco.com> Message-ID: <43718107-9E22-4F1B-B8E6-6E5172DF8644@bennett.com> One of the funniest bits of history about Ethernet is an interview where Bob Metcalfe said he and Boggs designed around passive cable because they felt a hub or switch would be a bottleneck. Given that a switch is an electronic device that moves bits between other electronic devices, this never made much sense. Can a NIC generate traffic faster than a switch can relay it? 
And as we see with switching fabrics, the wire has been the bottleneck all along. > On Mar 31, 2019, at 8:02 PM, Scott O. Bradner wrote: > > sure - after twisted pair was standardized 
-- Richard Bennett High Tech Forum Founder Ethernet & Wi-Fi standards co-creator Internet Policy Consultant -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at bennett.com Sun Mar 31 19:37:36 2019 From: richard at bennett.com (Richard Bennett) Date: Sun, 31 Mar 2019 20:37:36 -0600 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> Message-ID: <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com> 3Com's office on Kifer Rd was wired in a hub-and-spoke configuration for thin coax, with each user's cubicle wired direct to a multi-port repeater. 
This made it difficult for a user to crash the department's network by unplugging one of their BNC connectors. Bus was always a mistake, and not just because offices are designed for hub-and-spoke power and phone wires. But wireless is better now that the speeds are up, so Aloha had it right all along. > On Mar 31, 2019, at 6:25 PM, Brian E Carpenter wrote: > > I don't know the numbers but we had quite a lot at CERN, and several > technicians who became experts at adding a tap. I think Boeing had > a lot too, and Microsoft. > > I do agree that CheaperNet made scaling up a lot easier and made > everything more user-proof. Although we did once have a user (i.e. > a physicist) who discovered ungrounded screens on a bunch of CheaperNet > coax cables, and soldered them all to ground. Of course that created > numerous ground loops and broke everything in his area, since the coax > screen should only be grounded at one end. (There were some areas of > CERN where you could measure ground currents of 30 or 40 amps AC, due > to some very, very big electromagnets that inevitably unbalanced > the 3-phase system.) > > That particular user later became head of CERN's IT Division. > > Regards > Brian Carpenter 
Bradner wrote: >>>> >>>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over >>>> token ring >>>> >>>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but >>>> a lot of the original yellow cable was deployed >>>> >>>> Scott >>>> >>>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett wrote: >>>>> >>>>> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn?t handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to 3C501 so as to avoid making it choke. >>>>> >>>>> AMD?s first Ethernet chip seeded its random number generator (need for the random exponential binary backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. >>>>> >>>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. >>>>> >>>>> All in all, coax Ethernet was a horrible design in practice. >>>>> >>>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn?t quite there yet. 
>>>>> >>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at some like 98% of wire speed. >>>>> >>>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which expert witnesses to collect some nice fees until they expired. >>>>> >>>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. >>>>> >>>>> RB >>>>> >>>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty wrote: >>>>>> >>>>>> The Unix-box problem was the only one I recall. However, we did move >>>>>> the testing world onto a separate LAN so anything bad that some random >>>>>> box did wouldn't affect everyone. So it may have happened but we didn't >>>>>> care. Our mission was to get the software working.... >>>>>> >>>>>> /Jack >>>>>> >>>>>> On 3/30/19 12:41 AM, Olivier MJ Cr?pin-Leblond wrote: >>>>>>> Dear Jack, >>>>>>> >>>>>>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 >>>>>>> compatible cards on the same network. >>>>>>> Having started with Novell & 3COM cards, all on Coax, we found that we >>>>>>> started getting timeouts when we added more cheap NE2000 compatible cards. >>>>>>> Did the same thing with oscilloscopes/analysers and tweaked parameters >>>>>>> to go around this problem. 
>>>>>>> Warm regards, >>>>>>> >>>>>>> Olivier >>>>>>> >>>>>>> On 30/03/2019 02:57, Jack Haverty wrote: >>>>>>>> I can confirm that there was at least one Unix vendor that violated the >>>>>>>> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >>>>>>>> at least one of every common computer so that we could test software. >>>>>>>> >>>>>>>> While testing, we noticed that when one particular type of machine was >>>>>>>> active doing a long bulk transfer, all of the other traffic on our LAN >>>>>>>> slowed to a crawl. I was a hardware guy in a software universe, but I >>>>>>>> managed to find one other hardware type, and we scrounged up an >>>>>>>> oscilloscope, and then looked closely at the wire and at the spec. >>>>>>>> >>>>>>>> I don't remember the details, but there was some timer that was supposed >>>>>>>> to have a certain minimum value and that Unix box was consistently >>>>>>>> violating it. So it could effectively seize the LAN for as long as it >>>>>>>> had traffic. >>>>>>>> >>>>>>>> Sorry, I can't remember which vendor it was. It might have been Sun, or >>>>>>>> maybe one specific model/vintage, since we had a lot of Sun equipment >>>>>>>> but hadn't noticed the problem before. >>>>>>>> >>>>>>>> I suspect there's a lot of such "standards" that are routinely violated >>>>>>>> in the network. Putting it on paper and declaring it mandatory doesn't >>>>>>>> make it true. Personally I never saw much rigorous certification >>>>>>>> testing or enforcement (not just of Ethernet), and the general >>>>>>>> "robustness" designs can hide bad behavior. 
>>>>>>>> >>>>>>>> /Jack Haverty >>>>>>>> >>>>>>>> >>>>>>>> On 3/29/19 5:40 PM, John Gilmore wrote: >>>>>>>>> Karl Auerbach wrote: >>>>>>>>>> I recently had someone confirm a widely held belief that Sun >>>>>>>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>>>>>>>>> to have a winning bias against Ethernet machines that adhered to the >>>>>>>>>> IEEE/DIX ethernet timer values. Those of us who tended to work with >>>>>>>>>> networked PC platforms were well aware of the effect of putting a Sun >>>>>>>>>> onto the same Ethernet: what had worked before stopped working, but >>>>>>>>>> the Suns all chatted among themselves quite happily. >>>>>>>>> Are we talking about 10 Mbit Ethernet, or something later? >>>>>>>>> >>>>>>>>> I worked at Sun back then. Sun was shipping products with Ethernet >>>>>>>>> before the IBM PC even existed. Sun products used standard Ethernet >>>>>>>>> chips. Some of those chips were super customizable via internal >>>>>>>>> registers (I have a T1 card that uses an Ethernet chip with settings >>>>>>>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to >>>>>>>>> meet the standard specs. What evidence is there of any non-standard >>>>>>>>> settings? >>>>>>>>> >>>>>>>>> What Sun did differently was that we tuned the implementation so it >>>>>>>>> could actually send and receive back-to-back packets, at the minimum >>>>>>>>> specified inter-packet gaps. By building both the hardware and the >>>>>>>>> software ourselves (like Apple today, and unlike Microsoft), we were >>>>>>>>> able to work out all the kinks to maximize performance. We could >>>>>>>>> improve everything: software drivers, interrupt latencies, TCP/IP >>>>>>>>> stacks, DMA bus arbitration overhead. Sun was the first to do >>>>>>>>> production shared disk-drive access over Ethernet, to reduce the cost of >>>>>>>>> our "diskless" workstations. 
In sending 4Kbyte filesystem blocks among >>>>>>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>>>>>>>> BACK-TO-BACK Ethernet packets. >>>>>>>>> >>>>>>>>> Someone, I think it was Van Jacobson, did some early work on maximizing >>>>>>>>> Ethernet thruput, and reported it at USENIX conferences. His >>>>>>>>> observation was that to get maximal thruput, you needed 3 things to be >>>>>>>>> happening absolutely simultaneously: the sender processing & queueing >>>>>>>>> the next packet; the Ethernet wire moving the current packet; the >>>>>>>>> receiver dequeueing and processing the previous packet. If any of these >>>>>>>>> operations took longer than the others, then that would be the limiting >>>>>>>>> factor in the thruput. This applies to half duplex operation (only one >>>>>>>>> side transmits at a time); the end node processing requirement doubles >>>>>>>>> if you run full duplex data in both directions (on more modern Ethernets >>>>>>>>> that support that) . >>>>>>>>> >>>>>>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>>>>>>>> favorite things to work on was network performance. Here's one of his >>>>>>>>> signature blocks from 1996: >>>>>>>>> >>>>>>>>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>>>>>>>> 199 usec remote TCP latency over 100Mb/s //// >>>>>>>>> ethernet. Beat that! //// >>>>>>>>> -----------------------------------------////__________ o >>>>>>>>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>>>>>>>> >>>>>>>>> My guess is that the ISA cards of the day had never even *seen* back to >>>>>>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>>>>>>>> them), so of course they weren't tested to be able to handle them. The >>>>>>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>>>>>>>> most cards just had one or two packet buffers. 
And if the CPU didn't >>>>>>>>> immediately grab one of those received buffers, then the next packet >>>>>>>>> would get dropped for lack of a buffer to put it in. In sending, you >>>>>>>>> had to have the second buffer queued long before the inter-packet gap, or >>>>>>>>> you wouldn't send with minimum packet spacing on the wire. Most PC >>>>>>>>> operating systems couldn't do that. And if your card was slower than >>>>>>>>> the standard 9.6usec inter-packet gap after sensing carrier, >>>>>>>>> then any Sun waiting to transmit would beat your card to the wire, >>>>>>>>> deferring your card's transmission. >>>>>>>>> >>>>>>>>> You may have also been seeing the "Channel capture effect"; see: >>>>>>>>> >>>>>>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>>>>>>>> >>>>>>>>> John >>>>>>>>> >>>>>>> >>>>>> _______ >>>>>> internet-history mailing list >>>>>> internet-history at postel.org >>>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>>> Contact list-owner at postel.org for assistance. >>>>> >>>>> ? >>>>> Richard Bennett >>>>> High Tech Forum Founder >>>>> Ethernet & Wi-Fi standards co-creator >>>>> >>>>> Internet Policy Consultant >>>>> >>>>> _______ >>>>> internet-history mailing list >>>>> internet-history at postel.org >>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>> Contact list-owner at postel.org for assistance. >>>> >>>> >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>> >>> ? >>> Richard Bennett >>> High Tech Forum Founder >>> Ethernet & Wi-Fi standards co-creator >>> >>> Internet Policy Consultant >>> >> >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. 
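The channel capture effect Gilmore links to is easy to reproduce in a toy model. The sketch below is only an illustration, with every parameter invented for the demo (two always-busy stations, one slotted contention round per packet, truncated binary exponential backoff); it models nothing vendor-specific. The key ingredient is that the station that wins a packet starts its next packet with a fresh attempt counter, while the loser, still retrying the same packet, keeps widening its backoff window.

```python
import random

def capture_sim(packets=2000, seed=1):
    """Toy two-station CSMA/CD model of the channel capture effect.
    Each packet, both stations draw a slot from their own backoff window;
    the smaller slot wins.  The winner resets its attempt counter, the
    loser (still retrying) increments it, so wins come in long streaks."""
    rng = random.Random(seed)
    attempts = [0, 0]          # per-station collision/retry counters
    wins = [0, 0]
    longest_run = run = 0
    last = None
    for _ in range(packets):
        while True:
            # window doubles with each attempt, capped at 2**10 slots
            slots = [rng.randrange(2 ** min(attempts[i] + 1, 10)) for i in (0, 1)]
            if slots[0] != slots[1]:
                break
            attempts = [a + 1 for a in attempts]   # collision: both widen windows
        winner = 0 if slots[0] < slots[1] else 1
        wins[winner] += 1
        attempts[winner] = 0          # winner's next packet starts fresh
        attempts[1 - winner] += 1     # loser keeps backing off further
        run = run + 1 if winner == last else 1
        last = winner
        longest_run = max(longest_run, run)
    return wins, longest_run

wins, longest_run = capture_sim()
print(wins, longest_run)
```

Running it shows the wins arriving in very long streaks: once a station wins, its small window lets it keep seizing the wire until the starved station gets a lucky draw, which is exactly the capture behavior the linked article describes.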
>>
>
?
Richard Bennett
High Tech Forum Founder
Ethernet & Wi-Fi standards co-creator

Internet Policy Consultant

From brian.e.carpenter at gmail.com Sun Mar 31 20:08:37 2019
From: brian.e.carpenter at gmail.com (Brian E Carpenter)
Date: Mon, 1 Apr 2019 16:08:37 +1300
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com>
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
 <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>
 <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com>
 <30603.1553906453@hop.toad.com>
 <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org>
 <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org>
 <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com>
 <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com>
 <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com>
 <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com>
Message-ID: <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com>

Yes, but money was an issue and daisy-chained coax really was Cheaper.

Money ceased to be an issue when 100% of the CERN physics community
(a) insisted on a network connection at every desk, and
(b) had experienced outages due to some clown breaking
the Cheapernet daisy chain.

When those conditions were met (about the end of 1995, I think),
I went to management and got the budget to recable everywhere
with UTP-5. The biggest and easiest budget request I ever made.

Regards
Brian

On 01-Apr-19 15:37, Richard Bennett wrote:
> 3Com's office on Kifer Rd was wired in a hub-and-spoke configuration for thin coax, with each user's cubicle wired direct to a multi-port repeater. This made it difficult for a user to crash the department's network by unplugging one of their BNC connectors. Bus was always a mistake, and not just because offices are designed for hub-and-spoke power and phone wires.
But wireless is better now that the speeds are up, so Aloha had it right all along.
>
>> On Mar 31, 2019, at 6:25 PM, Brian E Carpenter > wrote:
>>
>> I don't know the numbers but we had quite a lot at CERN, and several
>> technicians who became experts at adding a tap. I think Boeing had
>> a lot too, and Microsoft.
>>
>> I do agree that CheaperNet made scaling up a lot easier and made
>> everything more user-proof. Although we did once have a user (i.e.
>> a physicist) who discovered ungrounded screens on a bunch of CheaperNet
>> coax cables, and soldered them all to ground. Of course that created
>> numerous ground loops and broke everything in his area, since the coax
>> screen should only be grounded at one end. (There were some areas of
>> CERN where you could measure ground currents of 30 or 40 amps AC, due
>> to some very, very big electromagnets that inevitably unbalanced
>> the 3-phase system.)
>>
>> That particular user later became head of CERN's IT Division.
>>
>> Regards
>>   Brian Carpenter
>>
>> On 01-Apr-19 11:56, Scott O. Bradner wrote:
>>> hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide
>>>
>>> Scott
>>>
>>>> On Mar 31, 2019, at 6:43 PM, Richard Bennett > wrote:
>>>>
>>>> Ethernet is a catchy name, I'll give you that.
>>>>
>>>> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than a thousand; that was good because there was no way to upgrade them to higher speeds.
>>>>
>>>>> On Mar 31, 2019, at 4:23 PM, Scott O.
Bradner > wrote:
>>>>>
>>>>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over
>>>>> token ring
>>>>>
>>>>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but
>>>>> a lot of the original yellow cable was deployed
>>>>>
>>>>> Scott
>>>>>
>>>>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett > wrote:
>>>>>>
>>>>>> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn't handle incoming packets with less than an 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C505, had an embedded CPU and the first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to a 3C501 so as to avoid making it choke.
>>>>>>
>>>>>> AMD's first Ethernet chip seeded its random number generator (needed for the random exponential binary backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible.
>>>>>>
>>>>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation.
>>>>>>
>>>>>> All in all, coax Ethernet was a horrible design in practice.
>>>>>>
>>>>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn't quite there yet.
>>>>>>
>>>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at something like 98% of wire speed.
>>>>>>
>>>>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which allowed expert witnesses to collect some nice fees until they expired.
>>>>>>
>>>>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end.
>>>>>>
>>>>>> RB
>>>>>>
>>>>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty > wrote:
>>>>>>>
>>>>>>> The Unix-box problem was the only one I recall. However, we did move
>>>>>>> the testing world onto a separate LAN so anything bad that some random
>>>>>>> box did wouldn't affect everyone. So it may have happened but we didn't
>>>>>>> care. Our mission was to get the software working....
>>>>>>>
>>>>>>> /Jack
>>>>>>>
>>>>>>> On 3/30/19 12:41 AM, Olivier MJ Crépin-Leblond wrote:
>>>>>>>> Dear Jack,
>>>>>>>>
>>>>>>>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000
>>>>>>>> compatible cards on the same network.
>>>>>>>> Having started with Novell & 3COM cards, all on Coax, we found that we
>>>>>>>> started getting timeouts when we added more cheap NE2000 compatible cards.
>>>>>>>> Did the same thing with oscilloscopes/analysers and tweaked parameters
>>>>>>>> to go around this problem.
>>>>>>>>> Warm regards,
>>>>>>>>>
>>>>>>>>> Olivier
>>>>>>>>>
>>>>>>>>> On 30/03/2019 02:57, Jack Haverty wrote:
>>>>>>>>>> [...]
>>>>>>>>
>>>>>>>> _______
>>>>>>>> internet-history mailing list
>>>>>>>> internet-history at postel.org
>>>>>>>> http://mailman.postel.org/mailman/listinfo/internet-history
>>>>>>>> Contact list-owner at postel.org for assistance.
>>>>>>>
>>>>>>> ?
>>>>>>> Richard Bennett
>>>>>>> High Tech Forum Founder
>>>>>>> Ethernet & Wi-Fi standards co-creator
>>>>>>>
>>>>>>> Internet Policy Consultant
>>>>>>>
>>>>>>> _______
>>>>>>> internet-history mailing list
>>>>>>> internet-history at postel.org
>>>>>>> http://mailman.postel.org/mailman/listinfo/internet-history
>>>>>>> Contact list-owner at postel.org for assistance.
>>>>>>
>>>>>>
>>>>>> _______
>>>>>> internet-history mailing list
>>>>>> internet-history at postel.org
>>>>>> http://mailman.postel.org/mailman/listinfo/internet-history
>>>>>> Contact list-owner at postel.org for assistance.
>>>>>
>>>>> ?
>>>> Richard Bennett
>>>> High Tech Forum Founder
>>>> Ethernet & Wi-Fi standards co-creator
>>>>
>>>> Internet Policy Consultant
>>>>
>>>
>>>
>>> _______
>>> internet-history mailing list
>>> internet-history at postel.org
>>> http://mailman.postel.org/mailman/listinfo/internet-history
>>> Contact list-owner at postel.org for assistance.
>>>
>>
>
> ?
> Richard Bennett
> High Tech Forum Founder
> Ethernet & Wi-Fi standards co-creator
>
> Internet Policy Consultant
>

From richard at bennett.com Sun Mar 31 20:13:45 2019
From: richard at bennett.com (Richard Bennett)
Date: Sun, 31 Mar 2019 21:13:45 -0600
Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com>
References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org>
 <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org>
 <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com>
 <30603.1553906453@hop.toad.com>
 <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org>
 <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org>
 <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com>
 <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com>
 <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com>
 <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com>
 <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com>
Message-ID: 

Heh, the hub-and-spoke redesign came from the IEEE 802.3 Low-cost LAN task group, of which I was a member. Apart from NICs, the economics of coax Ethernet were dominated by labor, wire, transceivers, and fault isolation, all of which were much cheaper with twisted pair, hub-and-spoke, and RJ-45 connectors.

> On Mar 31, 2019, at 9:08 PM, Brian E Carpenter wrote:
>
> Yes, but money was an issue and daisy-chained coax really was Cheaper.
>
> Money ceased to be an issue when 100% of the CERN physics community
> (a) insisted on a network connection at every desk, and
> (b) had experienced outages due to some clown breaking
> the Cheapernet daisy chain.
> > When those conditions were met (about the end of 1995, I think), > I went to management and got the budget to recable everywhere > with UTP-5. The biggest and easiest budget request I ever made. > > Regards > Brian > > On 01-Apr-19 15:37, Richard Bennett wrote: >> 3Com?s office on Kifer Rd was wired in a hub-and-spoke configuration for thin coax, with each user?s cubicle wired direct to a multi-port repeater. This made it difficult for user to crash the department?s network by unplugging one of their BNC connectors. Bus was always a mistake, and not just because offices are designed for hub-and-spoke power and phone wires. But wireless is better now that the speeds are up, so Aloha had it right all along. >> >>> On Mar 31, 2019, at 6:25 PM, Brian E Carpenter > wrote: >>> >>> I don't know the numbers but we had quite a lot at CERN, and several >>> technicians who became experts at adding a tap. I think Boeing had >>> a lot too, and Microsoft. >>> >>> I do agree that CheaperNet made scaling up a lot easier and made >>> everything more user-proof. Although we did once have a user (i.e. >>> a physicist) who discovered ungrounded screens on a bunch of CheaperNet >>> coax cables, and soldered them all to ground. Of course that created >>> numerous ground loops and broke everything in his area, since the coax >>> screen should only be grounded at one end. (There were some areas of >>> CERN where you could measure ground currents of 30 or 40 amps AC, due >>> to some very, very big electromagnets that inevitably unbalanced >>> the 3-phase system.) >>> >>> That particular user later became head of CERN's IT Division. >>> >>> Regards >>> Brian Carpenter >>> >>> On 01-Apr-19 11:56, Scott O. Bradner wrote: >>>> hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide >>>> >>>> Scott >>>> >>>>> On Mar 31, 2019, at 6:43 PM, Richard Bennett > wrote: >>>>> >>>>> Ethernet is a catchy name, I?ll give you that. 
>>>>> >>>>> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than thousand; that was good because there was no way to upgrade them to higher speeds. >>>>> >>>>>> On Mar 31, 2019, at 4:23 PM, Scott O. Bradner > wrote: >>>>>> >>>>>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over >>>>>> token ring >>>>>> >>>>>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but >>>>>> a lot of the original yellow cable was deployed >>>>>> >>>>>> Scott >>>>>> >>>>>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett > wrote: >>>>>>> >>>>>>> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn?t handle incoming packets with less than 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and first chip that actually supported the standard, the 82586. The server card added delay when it knew it was talking to 3C501 so as to avoid making it choke. >>>>>>> >>>>>>> AMD?s first Ethernet chip seeded its random number generator (need for the random exponential binary backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. >>>>>>> >>>>>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. >>>>>>> >>>>>>> All in all, coax Ethernet was a horrible design in practice. >>>>>>> >>>>>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. 
They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn?t quite there yet. >>>>>>> >>>>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at some like 98% of wire speed. >>>>>>> >>>>>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which expert witnesses to collect some nice fees until they expired. >>>>>>> >>>>>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end. >>>>>>> >>>>>>> RB >>>>>>> >>>>>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty > wrote: >>>>>>>> >>>>>>>> The Unix-box problem was the only one I recall. However, we did move >>>>>>>> the testing world onto a separate LAN so anything bad that some random >>>>>>>> box did wouldn't affect everyone. So it may have happened but we didn't >>>>>>>> care. Our mission was to get the software working.... >>>>>>>> >>>>>>>> /Jack >>>>>>>> >>>>>>>> On 3/30/19 12:41 AM, Olivier MJ Cr?pin-Leblond wrote: >>>>>>>>> Dear Jack, >>>>>>>>> >>>>>>>>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 >>>>>>>>> compatible cards on the same network. 
>>>>>>>>> Having started with Novell & 3COM cards, all on Coax, we found that we >>>>>>>>> started getting timeouts when we added more cheap NE2000 compatible cards. >>>>>>>>> Did the same thing with oscilloscopes/analysers and tweaked parameters >>>>>>>>> to go around this problem. >>>>>>>>> Warm regards, >>>>>>>>> >>>>>>>>> Olivier >>>>>>>>> >>>>>>>>> On 30/03/2019 02:57, Jack Haverty wrote: >>>>>>>>>> I can confirm that there was at least one Unix vendor that violated the >>>>>>>>>> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >>>>>>>>>> at least one of every common computer so that we could test software. >>>>>>>>>> >>>>>>>>>> While testing, we noticed that when one particular type of machine was >>>>>>>>>> active doing a long bulk transfer, all of the other traffic on our LAN >>>>>>>>>> slowed to a crawl. I was a hardware guy in a software universe, but I >>>>>>>>>> managed to find one other hardware type, and we scrounged up an >>>>>>>>>> oscilloscope, and then looked closely at the wire and at the spec. >>>>>>>>>> >>>>>>>>>> I don't remember the details, but there was some timer that was supposed >>>>>>>>>> to have a certain minimum value and that Unix box was consistently >>>>>>>>>> violating it. So it could effectively seize the LAN for as long as it >>>>>>>>>> had traffic. >>>>>>>>>> >>>>>>>>>> Sorry, I can't remember which vendor it was. It might have been Sun, or >>>>>>>>>> maybe one specific model/vintage, since we had a lot of Sun equipment >>>>>>>>>> but hadn't noticed the problem before. >>>>>>>>>> >>>>>>>>>> I suspect there's a lot of such "standards" that are routinely violated >>>>>>>>>> in the network. Putting it on paper and declaring it mandatory doesn't >>>>>>>>>> make it true. Personally I never saw much rigorous certification >>>>>>>>>> testing or enforcement (not just of Ethernet), and the general >>>>>>>>>> "robustness" designs can hide bad behavior. 
>>>>>>>>>> >>>>>>>>>> /Jack Haverty >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 3/29/19 5:40 PM, John Gilmore wrote: >>>>>>>>>>> Karl Auerbach > wrote: >>>>>>>>>>>> I recently had someone confirm a widely held belief that Sun >>>>>>>>>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces >>>>>>>>>>>> to have a winning bias against Ethernet machines that adhered to the >>>>>>>>>>>> IEEE/DIX ethernet timer values. Those of us who tended to work with >>>>>>>>>>>> networked PC platforms were well aware of the effect of putting a Sun >>>>>>>>>>>> onto the same Ethernet: what had worked before stopped working, but >>>>>>>>>>>> the Suns all chatted among themselves quite happily. >>>>>>>>>>> Are we talking about 10 Mbit Ethernet, or something later? >>>>>>>>>>> >>>>>>>>>>> I worked at Sun back then. Sun was shipping products with Ethernet >>>>>>>>>>> before the IBM PC even existed. Sun products used standard Ethernet >>>>>>>>>>> chips. Some of those chips were super customizable via internal >>>>>>>>>>> registers (I have a T1 card that uses an Ethernet chip with settings >>>>>>>>>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to >>>>>>>>>>> meet the standard specs. What evidence is there of any non-standard >>>>>>>>>>> settings? >>>>>>>>>>> >>>>>>>>>>> What Sun did differently was that we tuned the implementation so it >>>>>>>>>>> could actually send and receive back-to-back packets, at the minimum >>>>>>>>>>> specified inter-packet gaps. By building both the hardware and the >>>>>>>>>>> software ourselves (like Apple today, and unlike Microsoft), we were >>>>>>>>>>> able to work out all the kinks to maximize performance. We could >>>>>>>>>>> improve everything: software drivers, interrupt latencies, TCP/IP >>>>>>>>>>> stacks, DMA bus arbitration overhead. Sun was the first to do >>>>>>>>>>> production shared disk-drive access over Ethernet, to reduce the cost of >>>>>>>>>>> our "diskless" workstations. 
In sending 4Kbyte filesystem blocks among >>>>>>>>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>>>>>>>>>> BACK-TO-BACK Ethernet packets. >>>>>>>>>>> >>>>>>>>>>> Someone, I think it was Van Jacobson, did some early work on maximizing >>>>>>>>>>> Ethernet thruput, and reported it at USENIX conferences. His >>>>>>>>>>> observation was that to get maximal thruput, you needed 3 things to be >>>>>>>>>>> happening absolutely simultaneously: the sender processing & queueing >>>>>>>>>>> the next packet; the Ethernet wire moving the current packet; the >>>>>>>>>>> receiver dequeueing and processing the previous packet. If any of these >>>>>>>>>>> operations took longer than the others, then that would be the limiting >>>>>>>>>>> factor in the thruput. This applies to half duplex operation (only one >>>>>>>>>>> side transmits at a time); the end node processing requirement doubles >>>>>>>>>>> if you run full duplex data in both directions (on more modern Ethernets >>>>>>>>>>> that support that) . >>>>>>>>>>> >>>>>>>>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>>>>>>>>>> favorite things to work on was network performance. Here's one of his >>>>>>>>>>> signature blocks from 1996: >>>>>>>>>>> >>>>>>>>>>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>>>>>>>>>> 199 usec remote TCP latency over 100Mb/s //// >>>>>>>>>>> ethernet. Beat that! //// >>>>>>>>>>> -----------------------------------------////__________ o >>>>>>>>>>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>>>>>>>>>> >>>>>>>>>>> My guess is that the ISA cards of the day had never even *seen* back to >>>>>>>>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>>>>>>>>>> them), so of course they weren't tested to be able to handle them. The >>>>>>>>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>>>>>>>>>> most cards just had one or two packet buffers. 
And if the CPU didn't >>>>>>>>>>> immediately grab one of those received buffers, then the next packet >>>>>>>>>>> would get dropped for lack of a buffer to put it in. In sending, you >>>>>>>>>>> had to have the second buffer queued long before the inter-packet gap, or >>>>>>>>>>> you wouldn't send with minimum packet spacing on the wire. Most PC >>>>>>>>>>> operating systems couldn't do that. And if your card was slower than >>>>>>>>>>> the standard 9.6usec inter-packet gap after sensing carrier, >>>>>>>>>>> then any Sun waiting to transmit would beat your card to the wire, >>>>>>>>>>> deferring your card's transmission. >>>>>>>>>>> >>>>>>>>>>> You may have also been seeing the "Channel capture effect"; see: >>>>>>>>>>> >>>>>>>>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>>>>>>>>>> >>>>>>>>>>> John >>>>>>>>>>> >>>>>>>>> >>>>>>>> _______ >>>>>>>> internet-history mailing list >>>>>>>> internet-history at postel.org >>>>>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>>>>> Contact list-owner at postel.org for assistance. >>>>>>> >>>>>>> -- >>>>>>> Richard Bennett >>>>>>> High Tech Forum Founder >>>>>>> Ethernet & Wi-Fi standards co-creator >>>>>>> >>>>>>> Internet Policy Consultant
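The back-to-back numbers in John Gilmore's message can be checked with a little arithmetic. A 4 KB NFS block plus the 8-byte UDP header is 4104 bytes of IP payload; with the usual 1500-byte Ethernet MTU and no IP options, each fragment carries at most 1480 payload bytes (fragment offsets count in 8-byte units), which is why the datagram goes out as exactly three frames. A rough sketch (the 10 Mb/s parameters and overhead figures are my assumptions, not from the message):

```python
# Back-of-envelope check of the "4K NFS block = three back-to-back
# Ethernet frames" claim, using classic 10 Mb/s Ethernet parameters.

MTU = 1500                 # Ethernet payload bytes
IP_HDR = 20                # IPv4 header, no options
UDP_HDR = 8
ETH_OVERHEAD = 14 + 4 + 8  # MAC header + FCS + preamble/SFD bytes
IFG_BITS = 96              # 9.6 us inter-frame gap at 10 Mb/s = 96 bit times

def fragments(udp_payload):
    """Split a UDP datagram into IPv4 fragment payload sizes."""
    total = udp_payload + UDP_HDR        # bytes carried inside IP
    per_frag = (MTU - IP_HDR) // 8 * 8   # offset granularity is 8 bytes
    sizes = []
    while total > 0:
        take = min(per_frag, total)
        sizes.append(take)
        total -= take
    return sizes

frags = fragments(4096)                  # -> [1480, 1480, 1144]

# Time on the wire for the whole burst, at 10 Mb/s (0.1 us per bit)
bits = sum((f + IP_HDR + ETH_OVERHEAD) * 8 for f in frags)
bits += IFG_BITS * (len(frags) - 1)
print(len(frags), "fragments,", bits / 10, "us on the wire")
# prints: 3 fragments, 3412.8 us on the wire
```

The 9.6 us gap is all the turnaround budget a receiver gets between frames, which is the point about one- and two-buffer ISA cards.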
>>>>> Richard Bennett >>>>> High Tech Forum Founder >>>>> Ethernet & Wi-Fi standards co-creator >>>>> >>>>> Internet Policy Consultant >>>>> >>>> >>>> >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>>> >>> >> >> ? >> Richard Bennett >> High Tech Forum Founder >> Ethernet & Wi-Fi standards co-creator >> >> Internet Policy Consultant >> > ? Richard Bennett High Tech Forum Founder Ethernet & Wi-Fi standards co-creator Internet Policy Consultant -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhc at dcrocker.net Sun Mar 31 20:17:57 2019 From: dhc at dcrocker.net (Dave Crocker) Date: Sun, 31 Mar 2019 20:17:57 -0700 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> Message-ID: <7366b989-002d-7329-d5ab-265619b15fa4@dcrocker.net> On 3/31/2019 3:23 PM, Scott O. Bradner wrote: > might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over > token ring When I first learned that the Irvine Ring had 1/4 of its real-estate devoted to a contention-based token-creation mechanism, I was pretty sure a contention system was going to win this particular technology debate. It was simply simpler. Pretty much no application required the predictability of access that a token system was theoretically capable of ensuring. 
BTW, its benefits notwithstanding, thin ether was fragile and a netops pain. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From touch at strayalpha.com Sun Mar 31 20:19:34 2019 From: touch at strayalpha.com (Joe Touch) Date: Sun, 31 Mar 2019 20:19:34 -0700 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <43718107-9E22-4F1B-B8E6-6E5172DF8644@bennett.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <684612DD-B93E-41E3-97F7-3204ADA491BC@sobco.com> <43718107-9E22-4F1B-B8E6-6E5172DF8644@bennett.com> Message-ID: <8CDCB415-0008-4616-BBE1-871EA50261D5@strayalpha.com> On Mar 31, 2019, at 7:31 PM, Richard Bennett wrote: > > One of the funniest bits of history about Ethernet is an interview where Bob Metcalfe said he and Boggs designed around passive cable because they felt a hub or switch would be a bottleneck. Switches and shared media both experience the bottleneck that prevents two sources from talking to one sink at the same time. Shared media manage this bottleneck via effects that happen at the physical layer. Switches enforce this through buffering and scheduling. Shared media experience other bottlenecks that switches do not. I.e., in some ways, a switch emulates a wire, but it also allows communication exchanges that a wire cannot. So the reality is opposite their intuition, given "wire speed" devices... > Given that the switch is an electronic device that moves bits between other electronic devices this never made much sense. Can a NIC generate traffic faster than a switch can relay it?
It's possible to generate data faster than it can be received, but that wouldn't be particularly useful. So assuming NICs that do both at the same speeds, it's impossible to generate data faster than something can switch it - at a minimum, the data could be switched through multiple stages of NIC/computers (e.g., a Clos network). > And as we see with switching fabrics, the wire has been the bottleneck all along. Well, to paraphrase Doc Brown, "where we're going, we don't need wires..." (if we use optics). Joe From brian.e.carpenter at gmail.com Sun Mar 31 20:31:11 2019 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 1 Apr 2019 16:31:11 +1300 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com> <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com> Message-ID: <6e39466b-0e59-941f-ef1f-dd3f1cbc9351@gmail.com> On 01-Apr-19 16:13, Richard Bennett wrote: > Heh, the hub-and-spoke redesign came from IEEE 802.3 Low-cost LAN task group, of which I was a member. Apart from NICs, the economics of coax Ethernet were dominated by labor, wire, transceivers, and fault isolation, all of which were much cheaper with twisted pair, hub-and-spoke, and RJ-45 connectors. In the long run, yes, but when you've already got a few thousand desks on daisy-chained coax, paid for little by little without a central budget, recabling the whole site for UTP-5/RJ-45 and a truckload of Cisco boxes was a fairly large financial shock.
I think a lot of campuses went through the same sequence (yellow cable, daisy-chain Cheapernet, UTP-5) in those years. It was the curse of early adopters. Brian > >> On Mar 31, 2019, at 9:08 PM, Brian E Carpenter > wrote: >> >> Yes, but money was an issue and daisy-chained coax really was Cheaper. >> >> Money ceased to be an issue when 100% of the CERN physics community >> (a) insisted on a network connection at every desk, and >> (b) had experienced outages due to some clown breaking >> the Cheapernet daisy chain. >> >> When those conditions were met (about the end of 1995, I think), >> I went to management and got the budget to recable everywhere >> with UTP-5. The biggest and easiest budget request I ever made. >> >> Regards >> Brian >> >> On 01-Apr-19 15:37, Richard Bennett wrote: >>> 3Com's office on Kifer Rd was wired in a hub-and-spoke configuration for thin coax, with each user's cubicle wired direct to a multi-port repeater. This made it difficult for a user to crash the department's network by unplugging one of their BNC connectors. Bus was always a mistake, and not just because offices are designed for hub-and-spoke power and phone wires. But wireless is better now that the speeds are up, so Aloha had it right all along. >>> >>>> On Mar 31, 2019, at 6:25 PM, Brian E Carpenter > wrote: >>>> >>>> I don't know the numbers but we had quite a lot at CERN, and several >>>> technicians who became experts at adding a tap. I think Boeing had >>>> a lot too, and Microsoft. >>>> >>>> I do agree that CheaperNet made scaling up a lot easier and made >>>> everything more user-proof. Although we did once have a user (i.e. >>>> a physicist) who discovered ungrounded screens on a bunch of CheaperNet >>>> coax cables, and soldered them all to ground. Of course that created >>>> numerous ground loops and broke everything in his area, since the coax >>>> screen should only be grounded at one end.
(There were some areas of >>>> CERN where you could measure ground currents of 30 or 40 amps AC, due >>>> to some very, very big electromagnets that inevitably unbalanced >>>> the 3-phase system.) >>>> >>>> That particular user later became head of CERN's IT Division. >>>> >>>> Regards >>>> Brian Carpenter >>>> >>>> On 01-Apr-19 11:56, Scott O. Bradner wrote: >>>>> hmm - we had a few hundred at Harvard at the peak so I find it hard to think that there were only a thousand world wide >>>>> >>>>> Scott >>>>> >>>>>> On Mar 31, 2019, at 6:43 PM, Richard Bennett > wrote: >>>>>> >>>>>> Ethernet is a catchy name, I'll give you that. >>>>>> >>>>>> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than a thousand; that was good because there was no way to upgrade them to higher speeds. >>>>>> >>>>>>> On Mar 31, 2019, at 4:23 PM, Scott O. Bradner > wrote: >>>>>>> >>>>>>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over >>>>>>> token ring >>>>>>> >>>>>>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely but >>>>>>> a lot of the original yellow cable was deployed >>>>>>> >>>>>>> Scott >>>>>>> >>>>>>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett > wrote: >>>>>>>> >>>>>>>> Ethernet was total crap before 10BASE-T for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn't handle incoming packets with less than an 80ms interframe gap because it had to handle an interrupt and complete a DMA operation for the first packet before it could start the reception of the next one. The 3Com server NIC, the 3C-505, had an embedded CPU and the first chip that actually supported the standard, the 82586.
The server card added delay when it knew it was talking to a 3C501 so as to avoid making it choke. >>>>>>>> >>>>>>>> AMD's first Ethernet chip seeded its random number generator (needed for the randomized binary exponential backoff algorithm) at power up, so a short power fail caused stations to synchronize, making collision resolution impossible. >>>>>>>> >>>>>>>> Most packets are generated by servers, so clients close to servers had higher priority than those farther away, an effect of propagation. >>>>>>>> >>>>>>>> All in all, coax Ethernet was a horrible design in practice. >>>>>>>> >>>>>>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets direct to user space from a smallish FIFO in the chip. These cards also predicted interrupt latency, firing off the reception logic in the driver at a time calculated to have the driver running at or before EOF. This made for some fun code when the driver was ready for a packet that wasn't quite there yet. >>>>>>>> >>>>>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed/CPU load tests of Ethernet cards and the market for chips was dominated by 3Com and Intel. IIRC, these cards ran at something like 98% of wire speed. >>>>>>>> >>>>>>>> The parallel tasking thing allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which allowed expert witnesses to collect some nice fees until they expired. >>>>>>>> >>>>>>>> 10BASE-T did away with CSMA-CD in favor of a full duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beam forming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half duplex, CSMA-CD system until the bitter end.
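The consequence of the AMD power-up seeding bug described above can be sketched in a few lines: binary exponential backoff only breaks ties if the stations' random streams differ. Two controllers whose generators come up in the same state pick the same slot on every retry and never resolve the collision. This is a simplified slotted model of my own, not the exact 802.3 state machine; the 16-attempt limit and the 1024-slot window cap are the standard's values.

```python
import random

def resolve_collision(seed_a, seed_b, max_attempts=16):
    """Simplified CSMA/CD retry: after each collision both stations draw a
    slot from [0, 2^k) with k = min(attempt, 10). Returns the attempt on
    which the draws differ (collision resolved), or None if the MAC gives
    up after max_attempts, as a real controller would."""
    a, b = random.Random(seed_a), random.Random(seed_b)
    for attempt in range(1, max_attempts + 1):
        k = min(attempt, 10)
        if a.randrange(2 ** k) != b.randrange(2 ** k):
            return attempt
    return None

# Identical seeds (the power-fail scenario): the stations shadow each
# other's backoff choices on every attempt.
print(resolve_collision(42, 42))  # None: never resolves
# Independent seeds: resolved after a small number of attempts.
print(resolve_collision(7, 99))
```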
>>>>>>>> >>>>>>>> RB >>>>>>>> >>>>>>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty > wrote: >>>>>>>>> >>>>>>>>> The Unix-box problem was the only one I recall. However, we did move >>>>>>>>> the testing world onto a separate LAN so anything bad that some random >>>>>>>>> box did wouldn't affect everyone. So it may have happened but we didn't >>>>>>>>> care. Our mission was to get the software working.... >>>>>>>>> >>>>>>>>> /Jack >>>>>>>>> >>>>>>>>> On 3/30/19 12:41 AM, Olivier MJ Crépin-Leblond wrote: >>>>>>>>>> Dear Jack, >>>>>>>>>> >>>>>>>>>> I wonder if you had that problem with a mix of 3COM, NE2000 and NE2000 >>>>>>>>>> compatible cards on the same network. >>>>>>>>>> Having started with Novell & 3COM cards, all on Coax, we found that we >>>>>>>>>> started getting timeouts when we added more cheap NE2000 compatible cards. >>>>>>>>>> Did the same thing with oscilloscopes/analysers and tweaked parameters >>>>>>>>>> to go around this problem. >>>>>>>>>> Warm regards, >>>>>>>>>> >>>>>>>>>> Olivier >>>>>>>>>> >>>>>>>>>> On 30/03/2019 02:57, Jack Haverty wrote: >>>>>>>>>>> I can confirm that there was at least one Unix vendor that violated the >>>>>>>>>>> Ethernet specs (10mb/s). I was at Oracle in the early 90s, where we had >>>>>>>>>>> at least one of every common computer so that we could test software. >>>>>>>>>>> >>>>>>>>>>> While testing, we noticed that when one particular type of machine was >>>>>>>>>>> active doing a long bulk transfer, all of the other traffic on our LAN >>>>>>>>>>> slowed to a crawl. I was a hardware guy in a software universe, but I >>>>>>>>>>> managed to find one other hardware type, and we scrounged up an >>>>>>>>>>> oscilloscope, and then looked closely at the wire and at the spec. >>>>>>>>>>> >>>>>>>>>>> I don't remember the details, but there was some timer that was supposed >>>>>>>>>>> to have a certain minimum value and that Unix box was consistently >>>>>>>>>>> violating it.
So it could effectively seize the LAN for as long as it >>>>>>>>>>> had traffic. >>>>>>>>>>> >>>>>>>>>>> Sorry, I can't remember which vendor it was. It might have been Sun, or >>>>>>>>>>> maybe one specific model/vintage, since we had a lot of Sun equipment >>>>>>>>>>> but hadn't noticed the problem before. >>>>>>>>>>> >>>>>>>>>>> I suspect there's a lot of such "standards" that are routinely violated >>>>>>>>>>> in the network. Putting it on paper and declaring it mandatory doesn't >>>>>>>>>>> make it true. Personally I never saw much rigorous certification >>>>>>>>>>> testing or enforcement (not just of Ethernet), and the general >>>>>>>>>>> "robustness" designs can hide bad behavior. >>>>>>>>>>> >>>>>>>>>>> /Jack Haverty
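The channel capture effect John links to earlier in this thread falls straight out of the backoff rules: the station that wins a collision resets its retry counter and contends with a short backoff window, while the loser's window keeps doubling, so the winner tends to win again. A toy two-station model (both always backlogged; my own simplified slot accounting, not the full 802.3 timing) makes the skew visible:

```python
import random

def capture(rounds=1000, seed=1):
    """Toy channel-capture model: each round both stations collide, draw a
    backoff slot from [0, 2^min(c, 10)), and the smaller draw sends a frame.
    The winner resets its collision counter; the loser's keeps growing, so
    whoever wins early tends to keep winning."""
    rng = random.Random(seed)
    counters = [1, 1]
    sent = [0, 0]
    for _ in range(rounds):
        while True:
            draws = [rng.randrange(2 ** min(c, 10)) for c in counters]
            if draws[0] != draws[1]:
                break                 # a tie means another collision: redraw
            counters = [c + 1 for c in counters]
        winner = draws.index(min(draws))
        sent[winner] += 1
        counters[winner] = 1          # success resets the backoff window
        counters[1 - winner] += 1     # the loser backs off even harder
    return sent

print(capture())  # typically heavily skewed toward one station
```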
From casner at acm.org Sun Mar 31 21:40:20 2019 From: casner at acm.org (Stephen Casner) Date: Sun, 31 Mar 2019 21:40:20 -0700 (PDT) Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: <6e39466b-0e59-941f-ef1f-dd3f1cbc9351@gmail.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com> <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com> <6e39466b-0e59-941f-ef1f-dd3f1cbc9351@gmail.com> Message-ID: On Mon, 1 Apr 2019, Brian E Carpenter wrote: > I think a lot of campuses went through the same sequence (yellow > cable, daisy-chain Cheapernet, UTP-5) in those years. It was the > curse of early adopters. Indeed, ISI evolved through those technologies. -- Steve From richard at bennett.com Sun Mar 31 21:47:43 2019 From: richard at bennett.com (Richard Bennett) Date: Sun, 31 Mar 2019 22:47:43 -0600 Subject: [ih] Internet History - from Community to Big Tech? In-Reply-To: <6e39466b-0e59-941f-ef1f-dd3f1cbc9351@gmail.com> References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com> <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com> <6e39466b-0e59-941f-ef1f-dd3f1cbc9351@gmail.com> Message-ID: <7258189A-BE1A-4F59-A9D2-F7F2C5998BA7@bennett.com> A rip-and-replace upgrade was literally the only way to get from 10Mbps Bluebook Ethernet to 100Mbps Ethernet. 
> On Mar 31, 2019, at 9:31 PM, Brian E Carpenter wrote: > > On 01-Apr-19 16:13, Richard Bennett wrote: >> Heh, the hub-and-spoke redesign came from IEEE 802.3 Low-cost LAN task group, of which I was a member. Apart from NICs, the economics of coax Ethernet were dominated by labor, wire, transceivers, and fault isolation, all of which were much cheaper with twisted pair, hub-and-spoke, and RJ-45 connectors. > In the long run, yes, but when you've already got a few thousand desks on daisy-chained coax, paid for little by little without a central budget, recabling the whole site for UTP-5/RJ-45 and a truckload of Cisco boxes was a fairly large financial shock. > > I think a lot of campuses went through the same sequence (yellow cable, daisy-chain Cheapernet, UTP-5) in those years. It was the curse of early adopters. > > Brian
Sun was the first to do >>>>>>>>>>>>> production shared disk-drive access over Ethernet, to reduce the cost of >>>>>>>>>>>>> our "diskless" workstations. In sending 4Kbyte filesystem blocks between >>>>>>>>>>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three >>>>>>>>>>>>> BACK-TO-BACK Ethernet packets. >>>>>>>>>>>>> >>>>>>>>>>>>> Someone, I think it was Van Jacobson, did some early work on maximizing >>>>>>>>>>>>> Ethernet thruput, and reported it at USENIX conferences. His >>>>>>>>>>>>> observation was that to get maximal thruput, you needed 3 things to be >>>>>>>>>>>>> happening absolutely simultaneously: the sender processing & queueing >>>>>>>>>>>>> the next packet; the Ethernet wire moving the current packet; the >>>>>>>>>>>>> receiver dequeueing and processing the previous packet. If any of these >>>>>>>>>>>>> operations took longer than the others, then that would be the limiting >>>>>>>>>>>>> factor in the thruput. This applies to half duplex operation (only one >>>>>>>>>>>>> side transmits at a time); the end node processing requirement doubles >>>>>>>>>>>>> if you run full duplex data in both directions (on more modern Ethernets >>>>>>>>>>>>> that support that). >>>>>>>>>>>>> >>>>>>>>>>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his >>>>>>>>>>>>> favorite things to work on was network performance. Here's one of his >>>>>>>>>>>>> signature blocks from 1996: >>>>>>>>>>>>> >>>>>>>>>>>>> Yow! 11.26 MB/s remote host TCP bandwidth & //// >>>>>>>>>>>>> 199 usec remote TCP latency over 100Mb/s //// >>>>>>>>>>>>> ethernet. Beat that! //// >>>>>>>>>>>>> -----------------------------------------////__________ o >>>>>>>>>>>>> David S.
Miller, davem at caip.rutgers.edu /_____________/ / // /_/ >< >>>>>>>>>>>>> >>>>>>>>>>>>> My guess is that the ISA cards of the day had never even *seen* back to >>>>>>>>>>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between >>>>>>>>>>>>> them), so of course they weren't tested to be able to handle them. The >>>>>>>>>>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so >>>>>>>>>>>>> most cards just had one or two packet buffers. And if the CPU didn't >>>>>>>>>>>>> immediately grab one of those received buffers, then the next packet >>>>>>>>>>>>> would get dropped for lack of a buffer to put it in. In sending, you >>>>>>>>>>>>> had to have the second buffer queued long before the inter-packet gap, or >>>>>>>>>>>>> you wouldn't send with minimum packet spacing on the wire. Most PC >>>>>>>>>>>>> operating systems couldn't do that. And if your card was slower than >>>>>>>>>>>>> the standard 9.6usec inter-packet gap after sensing carrier, >>>>>>>>>>>>> then any Sun waiting to transmit would beat your card to the wire, >>>>>>>>>>>>> deferring your card's transmission. >>>>>>>>>>>>> >>>>>>>>>>>>> You may have also been seeing the "Channel capture effect"; see: >>>>>>>>>>>>> >>>>>>>>>>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect >>>>>>>>>>>>> >>>>>>>>>>>>> John >>>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> _______ >>>>>>>>>> internet-history mailing list >>>>>>>>>> internet-history at postel.org >>>>>>>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>>>>>>> Contact list-owner at postel.org for assistance. >>>>>>>>> >>>>>>>>> ? 
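[Gilmore's point about one- and two-buffer ISA cards comes down to arithmetic on the figures he quotes: at 10 Mb/s a bit time is 100 ns, the interframe gap is 96 bit times (9.6 usec), and a minimum frame is 64 bytes plus an 8-byte preamble, so a receiver has only 9.6 usec between back-to-back frames to drain its buffer and re-arm. A back-of-the-envelope check of those 802.3 numbers, nothing vendor-specific:]

```python
BIT_TIME = 1e-7        # 100 ns per bit at 10 Mb/s
IFG_BITS = 96          # interframe gap: 96 bit times = 9.6 usec
PREAMBLE = 8           # preamble + start-of-frame delimiter, bytes
MIN_FRAME = 64         # minimum frame, bytes (header + pad + FCS)

ifg = IFG_BITS * BIT_TIME                            # 9.6 usec
min_on_wire = (MIN_FRAME + PREAMBLE) * 8 * BIT_TIME  # 57.6 usec
pps = 1 / (min_on_wire + ifg)                        # back-to-back min frames/sec

print(f"interframe gap:        {ifg * 1e6:.1f} usec")
print(f"min frame on wire:     {min_on_wire * 1e6:.1f} usec")
# ~14,880 frames/s: the classic wire-speed figure for minimum-size frames.
print(f"worst-case frame rate: {int(pps):,} frames/s")
```

[The 9.6 usec window is exactly what a single-buffer card had to beat: if the host had not drained the previous frame by then, the next back-to-back frame was dropped for lack of a buffer.]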
>>>>>>>>> Richard Bennett >>>>>>>>> High Tech Forum Founder >>>>>>>>> Ethernet & Wi-Fi standards co-creator >>>>>>>>> >>>>>>>>> Internet Policy Consultant
From karl at cavebear.com Sun Mar 31 23:01:17 2019 From: karl at cavebear.com (Karl Auerbach) Date: Sun, 31 Mar 2019 23:01:17 -0700 Subject: [ih] Internet History - from Community to Big Tech?
In-Reply-To: References: <8b9297df-9305-b1d9-4ad2-119195a26def@3kitty.org> <92abc9fe-639f-8875-ee0a-0bc70ce08179@3kitty.org> <7ba7c739-7904-7c98-ea60-f439b49f9532@cavebear.com> <30603.1553906453@hop.toad.com> <993f30a4-19ea-70f5-4e44-32f0d1b7b36d@3kitty.org> <20860fea-c243-77ca-95d4-9850a98ca808@3kitty.org> <96BAF843-7E73-4FE0-9930-4166AEB94488@bennett.com> <466E5220-FF9B-48AC-A9B5-51B7C973974C@sobco.com> <41917325-958c-3289-feb6-74d4aa1bd025@gmail.com> <5864E92C-FBF9-4B9D-8F26-3AC4DFFAE662@bennett.com> <4f4ce764-d344-f037-fb43-d6a01713e3e9@gmail.com> Message-ID: <10855b37-be56-79ce-d864-65b6c6fe29f1@cavebear.com> On 3/31/19 8:13 PM, Richard Bennett wrote: > Heh, the hub-and-spoke redesign came from IEEE 802.3 Low-cost LAN task > group, of which I was a member. Apart from NICs, the economics of coax > Ethernet were dominated by labor, wire, transceivers, and fault > isolation, all of which were much cheaper with twisted pair, > hub-and-spoke, and RJ-45 connectors. Do you happen to know how the Synoptics Lattisnet/Astranet precursor to 10-base-T came about? Their stuff was very similar to what eventually came out of IEEE. I was under the impression that the founders of Synoptics kinda had the basic idea of doing an ethernet-thing using phone wire in a star arrangement. Am I mis-remembering? And yes, coax, in any of its forms, was expensive to buy, expensive to install, subject to outages caused by a single flawed connector or station, and horribly expensive to diagnose and repair. (But even the original 10-base-T stuff I used from David Systems and Synoptics had that awful AUI slide connector.) --karl--