[ih] IPv4 address size debate

Craig Partridge craig at aland.bbn.com
Wed Nov 11 22:46:57 PST 2009


> Once one understands the bigger picture, one realizes that the question 
> of variable vs fixed is a non sequitur. But one does have to get free 
> of the constraints of a Ptolemaic approach to architecture.

Hi John:

I'm afraid I disagree (at the risk of being lumped in with the distinguished
company of Ptolemy).

I agree that in much of the networking and distributed systems world, variable
vs. fixed is not a big deal and has all the utility of the binary vs. ASCII
representations debate (i.e. not much).

But, in routers and encrypters and similar boxes that handle large volumes
of data, fixed vs. variable is still a challenge.  The fundamental issue is
that while links work in terms of bits and bytes, processors and memories
actually work in terms of blocks/chunks.  That's because of the parallelism
they use to go fast (and one reason they use parallelism is physics --
propagation times across chip boundaries, etc.).

So when writing code for routers that has to go fast, you are constantly
thinking about those blocks: trying to avoid crossing block boundaries
(both in instruction and data accesses) and trying to keep your software
touching the minimum number of blocks, because touching an additional block
is a serious performance hit.   Knowing exactly how your data is laid out
is a huge boon here -- it removes the uncertainty about how many blocks
you'll have to touch (and how many instructions you'll have to execute).

And sizing for the max (planning as if a variable address were always at
its maximum length) doesn't help either -- because there are two addresses
in a header, if the first one comes in short then all your plans for where
the second address lands are undone.
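
Again just a sketch, with a made-up variable-length format (each address
preceded by a one-byte length): the offset of the second address now
depends on data inside the packet, so the code has to read the first
address's length before it can even locate the second, and it cannot know
ahead of time which block that second address will fall in.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical variable-format header: [len][src...][len][dst...] */
    static int get_dst_var(const uint8_t *pkt, uint8_t dst[16],
                           uint8_t *dst_len)
    {
        uint8_t src_len = pkt[0];              /* must be read first...    */
        const uint8_t *p = pkt + 1 + src_len;  /* ...just to find the dest */
        *dst_len = p[0];                       /* known only at run time   */
        memcpy(dst, p + 1, *dst_len);
        return 1 + src_len + 1 + *dst_len;     /* header bytes consumed    */
    }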

Upleveling my point -- we have a computing abstraction (bytes) that doesn't
match how computers, when stressed for performance, actually work, and that
mismatch has implications for packet headers.

Thanks!

Craig


