[ih] GOSIP & compliance

John Day jeanjour at comcast.net
Sat Apr 2 18:19:21 PDT 2022


Thanks,

I can see that it is an argument for UDP. Pretty weak in my estimation, and it seems to misunderstand the problem.
Given that this was at the height of the silly ‘everything has to be connectionless’ fad, I am sure that had a lot to do with it too. With the overhead of BER, that didn’t leave a lot of room for anything useful in the packet.

So you send a flurry of UDP packets and some get through and you hope that the ones that do are useful.  Doesn’t sound great. Now if this flurry is commands to agents, the likelihood of random commands arriving in some order sounds more dangerous than useful. It could easily make a bad situation worse. If they are responses, then the manager is getting bits and pieces of information (and small ones at that) and trying to assemble some sort of picture of what is going on.
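
To put rough numbers on that, here is a toy sketch (mine, not anything from the period; the loss rate and the set of commands are made up) of what a flurry of independent datagrams looks like from the receiving end when the path is dropping half of them:

import random

random.seed(7)
commands = ["cmd-%d" % i for i in range(8)]   # a hypothetical flurry of commands to an agent
loss_probability = 0.5                        # assumed congestion-era loss rate

delivered = [c for c in commands if random.random() > loss_probability]
print("sent:     ", commands)
print("delivered:", delivered)   # an arbitrary subset; the agent acts on these alone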

GetNext was a pretty lame substitute for being able to retrieve a large structure in one operation. Sure, a single Get/Read of a large structure would generate more than 2 packets, but considerably fewer than GetNext would require, and the Get/Read would be a snapshot, whereas with GetNext the information could change out from under you.
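
To make the round-trip arithmetic concrete, here is a toy illustration (my own sketch, not SNMP code; the 100-row table and the get_next helper are stand-ins) of why walking a table with GetNext costs so much more than reading the structure in one operation:

# A 100-row table held by a hypothetical agent.
agent_table = {"1.3.6.1.x.%d" % i: "row-%d" % i for i in range(1, 101)}
oids = sorted(agent_table, key=lambda o: int(o.rsplit(".", 1)[1]))

def get_next(current):
    """Stand-in for GetNext: return the next variable, or None at the end."""
    i = oids.index(current)
    return oids[i + 1] if i + 1 < len(oids) else None

# GetNext walk: one request/response round trip per row, and the table is
# free to change between round trips, so the result is not a snapshot.
round_trips, rows, oid = 0, [], oids[0]
while oid is not None:
    rows.append(agent_table[oid])
    round_trips += 1
    oid = get_next(oid)
print("GetNext walk:", round_trips, "round trips for", len(rows), "rows")

# One-shot Get/Read of the whole structure: a handful of larger packets,
# all reflecting the table at a single instant.
snapshot = dict(agent_table)
print("snapshot:", len(snapshot), "rows in one operation")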

Let me tell a story:
In the 70s and early 80s, this was called network control. Starting with the ARPANET, I was always a firm believer that management was the more appropriate term: events in the network were happening too fast to keep a human in the loop. The most one could do was *manage* it. To capture that, by 1984 I had adopted the phrase, “Monitor and Repair, but not Control.”

I was giving a talk at Motorola cellular once and used that line. This was when Motorola had the entire UK cellular system. The old guys in the front row insisted *they* controlled *their* networks. Having heard this sort of thing before, I demurred. Then a young engineer in the back of the room, who had babysat the UK system for 3 years, piped up and said he thought he knew what I meant. He related an incident in the UK where the number of switch crashes dropped off precipitously for 6 weeks and then came back up. They couldn’t figure it out. They hadn’t made any configuration changes, hadn’t changed anything. Then they realized it was the 6 weeks the operators had been on strike! ;-) The network did much better when the operators weren’t trying to control it!  ;-)  Most of the time, the network would do better on its own than with operator help.

The UDP argument would seem to belong more in the old model of network control.  If things have gotten so bad that the UDP argument might be useful, the battle has already been lost. Something, probably a lot of something, wasn’t done earlier.

The network is critical to the business, whether it is data, electricity, pipeline, water, etc. (They are remarkably similar.) As much uncertainty as possible needs to be driven out of the task, with contingencies for what can’t be. Network Management has to be one of the most boring tasks in networking. ;-)

All in all, it is nice to understand the argument, but I don’t buy it. Reliable transport for request/response, and connectionless for the event stream.
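
For what it is worth, that split is simple to sketch. A minimal outline in Python (the addresses, ports, and one-line wire format are all made up for illustration, and a real agent would have to be listening on the other end):

import socket

AGENT = ("192.0.2.10", 7777)   # hypothetical agent address and port
EVENT_PORT = 7778              # hypothetical port the manager listens on for events

def request(command):
    """Request/response over TCP: the reply arrives complete and in order,
    or the connection fails and the manager knows it."""
    with socket.create_connection(AGENT, timeout=5) as s:
        s.sendall(command.encode() + b"\n")
        return s.makefile().readline().strip()

def event_stream():
    """Event stream over UDP: each datagram stands on its own, so losing
    one event loses only that event, not everything queued behind it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", EVENT_PORT))
    while True:
        datagram, peer = sock.recvfrom(2048)
        yield peer, datagram.decode(errors="replace")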

Thanks again,
John


> On Apr 2, 2022, at 15:05, Craig Partridge <craig at tereschau.net> wrote:
> 
> Hi John:
> 
> The answer at the time ran as follows.  X number of datagrams are sent from the monitored station to the monitoring center (let's say X is 4) and all but one are discarded in the routers due to congestion loss.
> 
> In UDP, 1 datagram gets through -- one hopes with useful data -- in a timely way.
> 
> In TCP, unless the 1 datagram that gets through is the first datagram, you get nothing until the missing datagrams arrive to move the window forward.  So you get substantially delayed data or no data at all, as in many cases in the 1980s the connection would instead fail.
> 
> Craig
> 
> On Sat, Apr 2, 2022 at 11:26 AM John Day via Internet-history <internet-history at elists.isoc.org <mailto:internet-history at elists.isoc.org>> wrote:
> Please explain how UDP packets are less susceptible to congestion than TCP packets?  I would really like to know.
> 
> 
> 
> > On Apr 2, 2022, at 12:41, Greg Skinner <gregskinner0 at icloud.com <mailto:gregskinner0 at icloud.com>> wrote:
> > 
> > 
> >> On Mar 28, 2022, at 11:19 AM, John Day <jeanjour at comcast.net <mailto:jeanjour at comcast.net>> wrote:
> >> 
> >> Just to add to the comments,
> >> 
> >>> On Mar 28, 2022, at 12:48, Craig Partridge via Internet-history <internet-history at elists.isoc.org <mailto:internet-history at elists.isoc.org>> wrote:
> >>> 
> >>> The UDP vs. TCP debate was pretty fierce and the experience of the time
> >>> came down firmly on the UDP side. Recall this was the era of daily
> >>> congestion collapse of the Internet between 1987 and 1990.
> >> 
> >> Somehow this argument (which I know was intense at the time) is the most absurd. All of the functions in TCP that are relevant are feedback functions that only involve the source and destination. In between, the handling of UDP and TCP packets by the routers is the same. If anything, TCP packets with congestion control have a better chance of being received and a TCP solution would have required fewer packets be generated in the first place. (The last thing a management system should be doing when things go bad is generating lots of traffic, but SNMP was good at that.)
> > 
> > No argument from me about management systems generating too much traffic.  However, regarding congestion control, around 1987, mitigation was underway, but solutions that were widely deployed in TCP/IP implementations were still a few years off.  That helped make UDP more attractive, at least in the short term.
> > 
> > […]
> > 
> >>> There was a network management project in the late 1980s, name now eludes
> >>> me but led by Jil Wescott and DARPA funded, that sound similar in goals to
> >>> what Jack H. describes doing at Oracle.  I leaned on wisdom from those
> >>> folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to figure
> >>> out what HEMS should look like.
> > 
> > Right.  From the tcp-ip mailing list and Usenet newsgroup, January 1987:
> > 
> > ——
> > 
> > Date:      Tue, 20-Jan-87 12:12:04 EST
> > From:      leiner at ICARUS.RIACS.EDU <mailto:leiner at ICARUS.RIACS.EDU>
> > To:        mod.protocols.tcp-ip
> > Subject:   Re: Gateway Monitoring
> > 
> > Craig,
> > 
> > As you probably are aware, there has been quite a bit of work done
> > already in "monitoring".  In fact, Jil Westcott at BBN has been doing
> > some work in automated network monitoring related to ADDCOMPE and packet
> > radio networks.  There have also been several proposals for "monitoring
> > protocols".
> > 
> > I'm happy to see you working in this area.  It is clearly critical for
> > large internets like NSFnet and the evolving national research internet.
> > Hopefully, with this new push, a "standard approach" can be developed.
> > 
> > Barry
> > 
> > ——
> > 
> > BTW, interested readers can see discussions on this and related topics at the ban.ai TCP/IP mailing list archive <https://ban.ai/multics/non-multics-docs/tcpip-digest/sd-archive/archive/>.  (You may get a message indicating SSL certificates have expired.)
> > 
> > —gregbo
> > 
> > 
> 
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org <mailto:Internet-history at elists.isoc.org>
> https://elists.isoc.org/mailman/listinfo/internet-history <https://elists.isoc.org/mailman/listinfo/internet-history>
> 
> 
> -- 
> *****
> Craig Partridge's email account for professional society activities and mailing lists.



