[ih] Internet History - from Community to Big Tech?
Brian E Carpenter
brian.e.carpenter at gmail.com
Thu Mar 28 17:17:47 PDT 2019
On 29-Mar-19 11:33, Karl Auerbach wrote:
>
> On 3/27/19 7:16 PM, Jack Haverty wrote:
>
>> The "packet driver" standardization may have made it easier for all
>> those people to write their TCP stacks -- but there was no such
>> standardization at the next level - the APIs that allowed an app to use
>> those stacks. So we needed different code for each TCP stack.
>
> The Winsock API evolved on the top side of many of the TCP stacks.
> Winsock was a distant relative of the Unix socket API (and when I say
> "distant" I mean "they could almost see one another on a clear day with
> a good telescope"). If I remember correctly there was a vendor
> consortium to work on making sure that Winsock was clear and solid. I
> do remember running some Winsock interoperability bake-offs.
Yes, all of that. But Winsock2 remains incompatible with the POSIX
socket API, and subtle and not-so-subtle discrepancies still make
software portability a real problem to this day, whenever you try
to do anything even slightly off the beaten track.
>> I wonder if that is where the boundary starts between interoperability
>> and walled gardens
No, I don't think so, not any more. But as far as I'm concerned that
isn't a history topic... Oh, all right, I mean:
https://tools.ietf.org/html/draft-carpenter-limited-domains
Brian
>
> At the Interop shows, especially in the earlier days, we (the team that
> built and ran the show net) really beat up on vendors that were not
> interoperable. I remember at least one case where we simply unplugged a
> router/switch vendor because they were not playing nice.
>
> We always pre-built and pre-tested the main show network (45.x.x.x/8) in
> a warehouse a couple of months before the show. That way we had
> everything relatively solid before we loaded up the trucks (and we
> filled a lot of trucks - I remember once we filled 43 large semitrailers
> - and that was just for our own gear, not the vendors'.)
>
> And wow, did we ever find some pathological non-interoperation. But
> sometimes the cause was relatively innocent - in one instance, a
> difference of interpretation between Cisco and Wellfleet routers over
> the forwarding of IP multicast packets ended up causing us an infinite
> Ethernet frame loop. And once our FDDI expert -
> Merike Kaeo - found a specification flaw in FDDI physical layer stuff:
> The various vendors came up with a fix on the spot and were blasting new
> firmware into PROMs in their hotel rooms.
>
>
>> - i.e., where people take advantage of the "lower"
>> uniformity brought by some standard (whether in spec or in code), but
>> fail to coordinate standardization at the level "above" them, where they
>> present their services to the next guy up. By maintaining uniqueness,
>> they hope their walled garden will be the one to thrive.
>
> I recently had someone confirm a widely held belief that Sun
> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces
> to have a winning bias against Ethernet machines that adhered to the
> IEEE/DIX ethernet timer values. Those of us who tended to work with
> networked PC platforms were well aware of the effect of putting a Sun
> onto the same Ethernet: what had worked before stopped working, but the
> Suns all chatted among themselves quite happily.
>
> And FTP Software used to put its license key information in the padding
> of Ethernet frames, between the end of an ARP packet and the end of the
> frame's data field. That caused a lot of strange side effects. (One can
> still send a lot of IP stacks into death spirals by putting an IPv4/v6
> packet into an Ethernet frame that is larger than the minimum needed to
> hold the IP packet - a lot of deployed code still incorrectly uses the
> received frame size to impute the length of the IP packet rather than
> looking at the IP header.)
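
[Editorial note: the bug Karl describes can be sketched in a few lines of
illustrative Python (not any particular vendor's stack). Ethernet pads short
frames to a minimum size, so the frame is often longer than the IP packet it
carries; a correct receiver must read the IPv4 Total Length field from the
header instead of imputing the packet length from the frame size.]

```python
import struct

def ipv4_total_length(frame: bytes) -> int:
    """Parse the IPv4 Total Length field out of an Ethernet frame.

    A correct receiver trusts this header field, not the frame size:
    Ethernet pads short frames up to its 60-byte minimum (excluding
    the FCS), so frame length >= IP packet length, not ==.
    """
    ETH_HEADER = 14
    # Total Length is bytes 2-3 of the IPv4 header, big-endian.
    (total_len,) = struct.unpack_from("!H", frame, ETH_HEADER + 2)
    return total_len

# A bare 20-byte IPv4 header (no payload), Total Length = 20.
ip_header = struct.pack("!BBHHHBBH4s4s",
                        0x45, 0, 20,   # version/IHL, TOS, total length
                        0, 0,          # identification, flags/frag offset
                        64, 6, 0,      # TTL, protocol (TCP), checksum
                        b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
eth = b"\xff" * 6 + b"\x00" * 6 + b"\x08\x00"   # dst, src, EtherType = IPv4
frame = (eth + ip_header).ljust(60, b"\x00")    # pad to Ethernet minimum

# Buggy code imputes 60 - 14 = 46 bytes of IP data; the header says 20.
assert len(frame) - 14 == 46
assert ipv4_total_length(frame) == 20
```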
>
> And FTP Software also realized that with IP fragmentation the receiver
> really does not know how big a buffer will ultimately be required until
> the last fragment arrives. So they altered their IP stack to send the
> last fragment first. That had the effect of crashing their competitor
> NetManage's stacks whenever they received a last fragment first.
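
[Editorial note: an illustrative Python sketch of the reassembly problem
(not any vendor's actual stack). Only the fragment whose More-Fragments
flag is clear fixes the total datagram size, so a robust receiver must
tolerate that fragment arriving first.]

```python
def datagram_size_if_known(fragments):
    """Return the reassembled datagram size, or None if still unknown.

    Each fragment is (byte_offset, payload_len, more_fragments).
    Only the fragment with more_fragments=False determines the total;
    until it arrives the receiver has only a lower bound.
    """
    for offset, length, more in fragments:
        if not more:
            return offset + length
    return None

# A 3000-byte datagram split into 1480-byte fragments:
frags = [(0, 1480, True), (1480, 1480, True), (2960, 40, False)]

assert datagram_size_if_known(frags[:2]) is None   # last fragment missing
assert datagram_size_if_known([frags[2]]) == 3000  # last-fragment-first: size known
```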
>
> --karl--
> _______
> internet-history mailing list
> internet-history at postel.org
> http://mailman.postel.org/mailman/listinfo/internet-history
> Contact list-owner at postel.org for assistance.
>