[ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]
Detlef Bosau
detlef.bosau at web.de
Thu Jun 5 18:22:41 PDT 2014
On 04.06.2014 04:41, Noel Chiappa wrote:
>
>
> First, start with the point that the endpoint _pretty much_ _has_ to have the
> mechanism to recognize that a packet has been lost, and retransmit it - no
> matter what the rest of the design looks like.
>
> Why? Because otherwise, the network has to never, ever, lose data - because
> if, once the host has sent a packet, it cannot reliably notice that it has
> been lost, and re-send it, the network cannot lose that packet.
Absolutely agreed.
>
> That means the network has to be a lot more complex: switches have to have a
> lot of state, they have to have their own mechanism for doing
> acknowledgements - since an upstream switch cannot discard its copy of a
> packet until the downstream has definitely gotten a copy - and the upstream
> has to hold the packet until the downstream acks, etc. etc.
There is a trade-off here. My own thinking over the last 10 years was
too "binary":
I considered local retransmissions XOR transport-layer retransmissions.
This is oversimplified.
Depending on the link, there may be alternative channel codes; in
mobile networks, alternative paths...
The challenge is to make the right choice among the feasible options
here, as the sketch below illustrates.
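(A toy illustration of that choice - the thresholds and link parameters
are invented, nothing here comes from a real stack:)

    from dataclasses import dataclass

    @dataclass
    class Link:
        loss_rate: float          # residual packet loss probability
        rtt_s: float              # round-trip time in seconds
        has_alternate_path: bool  # e.g. a second path in a mobile network

    def pick_recovery(link: Link) -> str:
        # Thresholds are invented for illustration only.
        if link.loss_rate < 1e-4:
            return "end-to-end retransmission alone suffices"
        if link.rtt_s > 0.25:
            return "retransmission-free FEC (ARQ too slow at this RTT)"
        if link.has_alternate_path:
            return "reroute around the lossy link"
        return "local link-layer ARQ"

    print(pick_recovery(Link(1e-6, 0.010, False)))  # wired-Ethernet-like
    print(pick_recovery(Link(1e-2, 0.300, False)))  # satellite-like
    print(pick_recovery(Link(1e-2, 0.020, True)))   # mobile, alternate path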
>
> (In fact, you wind up with something that looks a lot like the ARPANET.)
Perhaps. In Ethernet, TR, FDDI, ATM etc. we hardly have a retransmission
layer.
In wireless networks, we do - in ALL (terrestrial) technologies I know
of. In satellite networks, we generally prefer retransmission-free FEC
schemes.
>
> And even if the design has all that mechanism/state/complexity built in, it's
> _still_ not really guaranteed: what happens if the switch with the only copy
> of a packet dies? (Unless the design adopts the rule that there must always be
> _two_ switches with copies of a packet - even more complexity/etc.)
You got me wrong. I am not talking about splitting or ACK spoofing.
But if, e.g., a WiFi station changes from 16QAM to QPSK to accommodate
noise, this choice is made locally. A KA9Q stack from 1991 will perhaps
not even notice the difference. (Perhaps Craig will correct me here? Or
Phil Karn himself?)
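(Back-of-the-envelope, with an assumed symbol rate - the numbers are
mine, not measured:)

    # 16QAM carries 4 bits/symbol, QPSK 2 bits/symbol; at the same
    # symbol rate, the modulation change halves the raw PHY bit rate -
    # and the transport layer never hears about it.
    BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4}

    def phy_rate_bps(modulation: str, symbol_rate_baud: float) -> float:
        return BITS_PER_SYMBOL[modulation] * symbol_rate_baud

    SYMBOL_RATE = 12e6  # assumed 12 Msymbol/s, purely illustrative
    for mod in ("16QAM", "QPSK"):
        print(mod, phy_rate_bps(mod, SYMBOL_RATE) / 1e6, "Mbit/s")
    # -> 16QAM 48.0 Mbit/s, QPSK 24.0 Mbit/s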
>
> There are good architectural reasons why the endpoint is given the ultimate
> responsibility for making sure the data gets through: For one, it's really
> not possible to get someone else to do the job as well as the endpoint can
> (see above). This is fate-sharing / the end-end principle.
Again, you got me wrong. The endpoint is responsible for ensuring
eventual delivery.
The question is how this is achieved. Once a sender has received an ACK
for a packet, it can remove the packet from its queue -
NOT EARLIER!!!! And certainly this includes retransmitting a packet that
is not acked on time. However, it may also include e.g. rerouting
- and of course: a socket may cancel a flow if necessary and inform the
application accordingly.
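(A minimal sketch of exactly that rule - the names and the timeout
value are assumed, not from any particular stack:)

    import time

    class SenderQueue:
        RTO = 1.0  # assumed retransmission timeout in seconds

        def __init__(self, transmit):
            self.transmit = transmit  # callable(seq, data)
            self.inflight = {}        # seq -> (data, deadline)

        def send(self, seq, data):
            self.transmit(seq, data)
            self.inflight[seq] = (data, time.monotonic() + self.RTO)

        def ack(self, seq):
            # Only the ACK permits removing the copy - not earlier.
            self.inflight.pop(seq, None)

        def tick(self):
            # Retransmit anything not acked on time.
            now = time.monotonic()
            for seq, (data, deadline) in list(self.inflight.items()):
                if now >= deadline:
                    self.transmit(seq, data)
                    self.inflight[seq] = (data, now + self.RTO)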
>
> For another, once the design does that, the switches become a _lot_ simpler -
> an additional benefit. When you see things start to line up that way, it's
> probably a sign that you have found what Erdos would have called 'the design
> in The Book'.
>
>
> So, now that the design _has_ to have end-end retransmission, adding any other
> kind of re-transmission is - necessarily - just an optimization.
My ideas on a flow layer are still quite rough. However, in that case
the discussion becomes a lot simpler: I think that with a flow layer we
can avoid at least congestion-related drops. Afterwards the discussion
will be different from the current one. At the moment we mostly deal
with drops; corruption is a rare exception, at least in wired networks.
If we could avoid drops, we would have to deal with corruption only.
>
> And to decide whether an optimization is worth it, one has to look at a
> number of aspects: how much complexity it adds, how much it improves the
> performance, etc, etc.
Agreed. It is a trade-off which must be properly assessed.
>
> I understand your feeling that 'doing the retransmission on an end-end basis
> wastes resources', but... doing local retransmission _as well_ as end-end
> retransmission (which one _has_ to have - see above) is going to make things
> more complicated - perhaps _significantly_ more complicated. Just how much
> more, depends on exactly what is done, and how.
And exactly that is the discussion from Jerry Saltzer's paper. I don't
see it as an argument for making end-to-end retransmission mandatory by
all means, but for putting retransmission in the right place.
>
> E.g. does the mechanism only re-send packets when a damaged packet is
> received at the down-stream, and the packet is not too badly damaged for the
> down-stream to figure out which packet was lost, and ask for a
> re-transmission from the up-stream? This is less complex than the down-stream
> acking every packet, _but_ ... the up-stream _still_ has to keep a copy of
> every packet (in case it needs to re-transmit) - and how does it decide when
> to discard it (since there is no ack)? Add acks, and the up-stream knows for
> sure when it can ditch its copy - but now there's a lot more complexity,
> state, etc.
>
> Doing local re-transmission is a _lot_ more complexity (and probably limits
> performance) - and does it buy you enough to make it worth it? Probably not...
>
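(To make Noel's bookkeeping question concrete - a rough sketch; every
name and the structure are invented:)

    class UpstreamBuffer:
        """NACK-only hop-by-hop retransmission, as sketched above."""

        def __init__(self, resend):
            self.resend = resend  # callable(seq)
            self.copies = {}      # seq -> payload, kept "just in case"

        def sent(self, seq, payload):
            self.copies[seq] = payload

        def on_nack(self, seq):
            if seq in self.copies:
                self.resend(seq)
            # NACK-only: when may self.copies[seq] be freed? There is
            # no exact answer without a timer heuristic...

        def on_ack(self, seq):
            # ...whereas with ACKs the answer is exact - at the price
            # of more state and more control traffic.
            self.copies.pop(seq, None)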
In wireless networks, the decision has been made.
With an interesting consequence: the service times vary a lot. And so
does the throughput. And so does the throughput-delay product,
in other words: the "path capacity".
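(Invented numbers, just to show how much that product can swing when
link-layer retransmissions stretch the service time:)

    BASE_RTT = 0.050  # assumed 50 ms path RTT excluding this link
    PKT_BITS, SLOT = 12_000, 0.002  # 1500-byte packet, 2 ms per attempt

    for attempts in (1, 2, 4):
        service = attempts * SLOT     # link service time after retries
        tput = PKT_BITS / service     # what the link sustains
        capacity = tput * (BASE_RTT + service)  # throughput-delay product
        print(f"{attempts} attempt(s): {tput/1e6:.1f} Mbit/s, "
              f"{capacity:.0f} bits 'path capacity'")
    # 1 attempt(s): 6.0 Mbit/s, 312000 bits 'path capacity'
    # 2 attempt(s): 3.0 Mbit/s, 162000 bits 'path capacity'
    # 4 attempt(s): 1.5 Mbit/s, 87000 bits 'path capacity'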
--
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de http://www.detlef-bosau.de