[ih] IPv4 address size debate
Vint Cerf
vint at google.com
Fri Nov 13 03:13:49 PST 2009
That's a red herring. By the time IP and TCP dealt with the headers,
the Ethernet portion was stripped away.
v
On Nov 12, 2009, at 2:04 PM, Richard Bennett wrote:
> I remember when handling packets at wire speed was a challenge, but
> that was solved by hardware. The 48-bit Ethernet MAC address was a
> much bigger issue than IP addresses, and the size of the Ethernet
> header (112 bits) guaranteed that the IP header wasn't going to be
> 32-bit aligned anyhow.
>
> John Day wrote:
>> You missed the point of my comment. I am well aware of the coding
>> issues, although Oran and others have always argued that variable
>> length was not a big deal in hardware.
>>
>> The point was that if you think in terms of a relative
>> architecture, rather than the traditional fixed flat architecture,
>> fixed is variable, or was that variable is fixed? ;-)
>>
>> I was implying that fixed was really all that was necessary, if you
>> really understood the inherent structure. But then you knew that,
>> didn't you?
>>
>> Take care,
>> John
>>
>> At 1:46 -0500 2009/11/12, Craig Partridge wrote:
>>>> Once one understands the bigger picture, one realizes that the
>>>> question of variable vs fixed is a non sequitur. But one does have
>>>> to get free of the constraints of a Ptolemaic approach to
>>>> architecture.
>>>
>>> Hi John:
>>>
>>> I'm afraid I disagree (at the risk of being lumped in with the
>>> distinguished company of Ptolemy).
>>>
>>> I agree that in much of the networking and distributed systems
>>> world, variable vs. fixed is not a big deal and has all the utility
>>> of the binary vs. ASCII representations debate (i.e. not much).
>>>
>>> But, in routers and encrypters and similar boxes that handle large
>>> volumes of data, fixed vs. variable is still a challenge. The
>>> fundamental issue is that while links work in terms of bits and
>>> bytes, processors and memories actually work in terms of
>>> blocks/chunks. That's because they use parallelism to go fast (and
>>> one reason they use parallelism is physics -- propagation times
>>> across chip boundaries, etc.).
>>>
>>> So when writing code for routers that has to go fast, you are
>>> constantly thinking about those blocks, trying to avoid crossing
>>> block boundaries (both in instructions and data accesses) and trying
>>> to keep your software using the minimum number of blocks, as
>>> touching an additional block is a serious performance hit. Knowing
>>> exactly how your data is laid out is a huge boon here -- it removes
>>> the uncertainty of how many blocks you'll have to touch (and how
>>> many instructions you have to execute).
>>>
>>> And sizing for the max (assuming the variable address is always max
>>> length) doesn't help either -- because there are two addresses in a
>>> header, if the first one is short then all your plans for the second
>>> address are undone.
>>>
>>> Upleveling my point -- we have a computing abstraction (bytes) which
>>> doesn't match how computers, when stressed for performance, actually
>>> work, and that has implications for packet headers.
>>>
>>> Thanks!
>>>
>>> Craig
>>
>
> --
> Richard Bennett
> Research Fellow
> Information Technology and Innovation Foundation
> Washington, DC
>
More information about the Internet-history mailing list