[ih] why did CC happen at all?

Detlef Bosau detlef.bosau at web.de
Wed Sep 3 16:19:19 PDT 2014


On 03.09.2014 at 23:30, Detlef Bosau wrote:
> But Paul's kind of yelling is simply not acceptable to me,
> particularly as he did not bring arguments but very personal
> invectives. (And he did not really read what I wrote; I never proposed
> a "core-based" congestion control scheme, and what is "core-based"
> anyway? So when someone thinks a statement or a claim of mine is wrong,
> I would appreciate at least being quoted correctly.)
>
>
Perhaps one addition: in his rant, Paul advocated an endpoint-controlled
system. This is basically the status quo.

The opposite extreme, and this was mentioned in the discussion as well,
is INTSERV or, particularly with respect to complexity, the
control-theoretic approach by Srinivasan Keshav.

Vint pointed to the importance of flexibility, which is provided by
packet switching.

However, I'm curious why we always see only these two alternatives:
Either a more or less chaotic system, which the Internet actually is, or
a strictly controlled system like the telephone system.

I sincerely think that the best way is somewhere in between.

Basically, I think this is the best way to revisit Saltzer's paper.
Saltzer, Reed, and Clark do not say that everything must be done at the
endpoints. What they basically say is that things should be done where
they belong.

A concrete example:

For years I was trapped in the idea that retransmissions should not be
done locally but only by the endpoints.

Put in that black-and-white way, this is simply nonsense.


In case of a link failure, a retransmitted packet will fail as long as
it still has to cross the failed link, no matter whether it is sent
locally or by the original sender.

However, from TCP's perspective, the network layer sits between the
link layer and the transport layer. When a link fails, the network layer
may, for example, change the routing, so a packet resent by its original
sender will then take the corrected route and eventually reach its
destination.
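
To make this concrete, here is a toy Python sketch (the four-node
topology, the routing table, and the failure are of course made up; no
real protocol works exactly like this):

# Toy illustration: a link on the primary route fails.
# A local (link-layer) retransmitter is bound to that one link and
# cannot help; the original sender's retransmission passes through the
# network layer again and can use a repaired route.

links_up = {("A", "B"): True, ("B", "D"): True,   # primary: A-B-D
            ("A", "C"): True, ("C", "D"): True}   # backup:  A-C-D

routing = {"A": ["A", "B", "D"]}                  # current route A -> D

def send_over(path):
    """A packet gets through only if every link on the path is up."""
    return all(links_up[hop] for hop in zip(path, path[1:]))

def local_retransmit(link, attempts=3):
    """Link-layer ARQ: every retry uses the same, possibly dead, link."""
    return any(links_up[link] for _ in range(attempts))

def end_to_end_retransmit(src="A"):
    """Transport-layer retry: the packet is routed anew on each attempt."""
    return send_over(routing[src])

links_up[("B", "D")] = False                      # the link B-D fails

print(local_retransmit(("B", "D")))  # False - retries on a dead link are hopeless
print(end_to_end_retransmit())       # False - old route still crosses B-D
routing["A"] = ["A", "C", "D"]                    # network layer reroutes
print(end_to_end_retransmit())       # True  - retransmission takes the new route

The point of the toy is only the placement question: a retry below the
network layer can never profit from the reroute.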

Where did this trap come from?

To be precise: I did not really consider all the alternatives.

Let me dwell on this point for a moment.

What about WiFi? In WiFi we have NO collision detection; we cannot
distinguish a collision from corruption. (This is a very important
difference from cellular mobile networking technologies.) As a
consequence, we must not restrict the number of retransmissions too
hard, otherwise a WiFi network may fail completely under heavy load. And
of course, the endpoints do not concern themselves with the local load
in a WiFi segment along the path.
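
A back-of-the-envelope sketch in Python (a crude slotted-contention
stand-in, not real 802.11 DCF, and all parameters are made up) shows
what a hard retry cap does under load:

import random

# Crude toy model: n stations contend; in each slot, every other station
# transmits with probability q, and our frame's attempt succeeds only if
# nobody else transmits in that slot.  Losses here are collisions, not a
# broken link, so further attempts still have a fair chance.

def frame_delivered(n_stations, q, cap, rng):
    for _ in range(cap):
        competitors = sum(rng.random() < q for _ in range(n_stations - 1))
        if competitors == 0:     # no collision in this slot
            return True
    return False                 # retry cap exhausted, frame dropped

rng = random.Random(1)
trials = 20000
for cap in (1, 2, 4, 7):
    delivered = sum(frame_delivered(10, 0.10, cap, rng) for _ in range(trials))
    print(f"cap={cap}  delivery ratio={delivered / trials:.3f}")

With these toy numbers, a cap of one or two attempts throws away a large
share of frames that a few more attempts would have delivered, although
the cause of the losses is contention, not a dead link.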

In contrast to the WiFi case, local retransmissions in mobile networks
should be restricted very hard. If you look at the block corruption
probability of a mobile link, it drops from 1 to 0 within an SNR range
of about 1 dB or so; there is hardly a real "range" in between. Hence
either a block is successfully sent in one, at most two, attempts, or
the link (at least in its current configuration) is not suited to convey
the block. It's that simple.
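
Again in toy numbers (the two BLER values are purely illustrative, and
the assumption that attempts fail independently is already generous on a
fading channel):

# Because the block error rate (BLER) sits either close to 0 or close
# to 1 (the waterfall region is only about 1 dB wide), extra local
# retransmissions buy almost nothing.  Either the first or second
# attempt succeeds, or the current modulation/coding is unsuitable and
# retries merely burn air time.

def success_within(bler, attempts):
    """P(block delivered within 'attempts' tries), attempts independent."""
    return 1.0 - bler ** attempts

for bler in (0.01, 0.99):            # "good side" vs. "bad side" of the cliff
    for attempts in (1, 2, 10):
        print(f"BLER={bler:.2f}  attempts={attempts:2d}  "
              f"P(success)={success_within(bler, attempts):.4f}")

On the good side of the cliff, one or two attempts already deliver the
block with near certainty; on the bad side, even ten attempts stay below
ten percent. Adapting the link configuration helps; retrying does not.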

So these two cases should be treated differently, which is actually not
done by an end-to-end approach.

However, I often see papers that restrict themselves to an end-to-end
view, while others take an INTSERV-like perspective. Both positions
exclude approaches that may lie somewhere in between.

And I admittedly take the position that we cannot reasonably control
the Internet's resource assignment and flow/congestion control from the
endpoints alone.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30   
70565 Stuttgart                            Tel.:   +49 711 5208031
                                           mobile: +49 172 6819937
                                           skype:     detlef.bosau
                                           ICQ:          566129673
detlef.bosau at web.de                     http://www.detlef-bosau.de





