<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 21, 2014 at 5:06 PM, Noel Chiappa <span dir="ltr"><<a href="mailto:jnc@mercury.lcs.mit.edu" target="_blank">jnc@mercury.lcs.mit.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div id=":1gr" class="a3s" style="overflow:hidden">Here is the 'SQ is wrong' meme again. But was it really broken, or did we<br>
just not know how to use it?</div></blockquote></div><br>Personally, I always thought SQ was broken, and said so when we first put it in.</div><div class="gmail_extra"><br></div><div class="gmail_extra">The problem was that SQs were to be sent when a packet was discarded somewhere in transit. So a gateway (router) that had to discard a packet because no buffers were available sent an SQ to the Host that had sent that packet. That's the best it could do, since it didn't remember any kind of state information, flows, etc. SQ was more accurately an "I dropped your packet, sorry about that" report that was just called Source Quench, launched at some possibly irrelevant user process.</div>
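For concreteness, RFC 792 defines Source Quench as ICMP type 4, code 0, carrying the offending datagram's IP header plus the first 64 bits of its payload, so the receiving host can try to match it to a connection. Here is a minimal Python sketch of building such a message; it is an illustration of the wire format, not anyone's actual gateway code, and it assumes a plain 20-byte IP header in the quoted datagram.

```python
import struct

ICMP_SOURCE_QUENCH = 4  # ICMP type 4 per RFC 792; code is always 0


def checksum(data: bytes) -> int:
    """Standard Internet checksum (RFC 1071): one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF


def build_source_quench(original_datagram: bytes) -> bytes:
    """Build an ICMP Source Quench quoting the dropped datagram.

    Assumes a 20-byte IP header with no options; the message quotes
    that header plus the first 8 bytes (64 bits) of the payload.
    """
    quoted = original_datagram[:28]  # 20-byte IP header + 8 payload bytes
    # type, code, checksum (0 for now), 4 unused bytes
    header = struct.pack("!BBHI", ICMP_SOURCE_QUENCH, 0, 0, 0)
    csum = checksum(header + quoted)
    header = struct.pack("!BBHI", ICMP_SOURCE_QUENCH, 0, csum, 0)
    return header + quoted
```

A correctly checksummed ICMP message sums (with the complement folded in) to zero, which is how a receiver would validate it before deciding what, if anything, to quench.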
<div class="gmail_extra"><br></div><div class="gmail_extra">Since the gateways had no state information about connections, that SQ could have gone to some Host that really had nothing to do with the excessive traffic that was causing the problem. It could also have gone to some process with a TCP connection that had nothing to do with the congestion. It made little sense for a TCP connection that had just opened, or that was already sending only a little data (a User Telnet), to be told to slow down.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">Dave Mills figured out that an appropriate response to receiving an SQ was to immediately retransmit, since you knew that your packet had been dropped. This was especially appropriate if your system was hung out on the end of a low speed dialup line, and thus very very unlikely to be sending enough traffic to be causing congestion. This of course did nothing to reduce traffic at all.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">The problem was that an SQ could easily go to a user process that had nothing to do with the congestion being experienced, and could do nothing useful to alleviate that congestion. SQs of course could also create more congestion themselves.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">I think it would have been possible to make smarter gateways that remembered a lot about recent traffic flows, and could thereby deduce which ones were causing the problem, directing an SQ to a source that would actually be appropriate to slow down. But that would start to look a lot like a virtual circuit net, where the internal mechanisms knew about flows and connections, rather than a datagram one. We already had the ARPANET with such internal mechanisms. IP was supposed to be different, lean and mean with very simple very fast switches and a mix of TCP and UDP traffic.</div>
<div class="gmail_extra"><br></div><div class="gmail_extra">So, yes, we didn't know how to use it, but I think it was also inappropriate for a "datagram network".</div><div class="gmail_extra"><br></div><div class="gmail_extra">
/Jack</div><div class="gmail_extra"><br></div></div>