[ih] Dotted decimal notation

Vint Cerf vint at google.com
Mon Dec 28 16:58:11 PST 2020


i believe michael is correct.

v


On Mon, Dec 28, 2020 at 7:51 PM Michael Greenwald via Internet-history <
internet-history at elists.isoc.org> wrote:

> On 2020-12-28 16:35, Brian E Carpenter via Internet-history wrote:
> > Thanks for the various replies. I wasn't there, but clearly some magic
> > happened between the 8-bit network numbers in RFC776 (January 1981) and
> > the emergence of Class A, B, C addressing in RFC790 (September 1981),
> > and that called for some new notation such as dotted decimal.
>
> For what it's worth, I am fairly certain that I was
> parsing dotted decimal addrs (10.0.0.6) "long" before
> class A, B, or C addresses existed. The rough description
> that I got from Dave Clark in 79-ish was that the
> first byte was the network, and the remaining 24 bits
> were structured as subnet/host in some network-specific
> way (8 bit imp/... n bit host, or 16 bit subnet/8 bit host,
> or whatever). And for debugging/tracing on Multics, I
> was definitely printing IP addrs as 4 dotted decimal
> numbers, even though I really, really didn't like them.
> So I assume that the dotted-decimal notation was in use
> in more than one place, already, by 79.
>
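> A rough sketch of the structure described above (an editorial
> illustration, not part of the original message; Python is used only as
> notation, and the function name is invented):
>
>   def split_early_addr(addr32):
>       # First byte identified the network; the remaining 24 bits were
>       # structured (imp/subnet/host) in some network-specific way.
>       net  = (addr32 >> 24) & 0xFF
>       rest = addr32 & 0x00FFFFFF
>       return net, rest
>
>   # 10.0.0.6 -> network 10, with the low 24 bits (6) left to be
>   # interpreted per-network
>   print(split_early_addr(0x0A000006))   # (10, 6)
>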
> >
> > That magic is not well documented in the RFC series, but in IEN175,
> > reporting on a January 1981 meeting, we find that
> >       "Vint Cerf led a further discussion on addressing.  The main focus
> >       was on the tradeoff between a flat address space and a
> >       hierarchical one...
> >       Vint suggests that we have both in one!  Let an address be
> >       composed of two parts: a hierarchical address (called an address)
> >       and a flat address (called an identifier)."
> >
> > I guess that became Class A, B, C by September, via IEN177, but it also
> > accurately describes IPv6 addressing.
> >
> > Regards
> >    Brian Carpenter
> >
> > On 29-Dec-20 10:54, Jack Haverty via Internet-history wrote:
> >> IIRC, this convention was created over time by a coalescence of "rough
> >> consensus and running code" as the early user programs (Telnet and FTP)
> >> were being rewritten to use TCP instead of NCP, so it would have been
> >> during the late 70s.   On the ARPANET, e.g., when using a particular
> >> Telnet, you would type "O <host>/<imp>", e.g., 1/6 to connect to
> >> MIT-DMS, host 1 on IMP 6, or "O 70", which was the equivalent.
> >> Something new was needed for specifying 32-bit IP addresses.
> >>
> >> Dotted quad was one early approach, where each of the four numbers
> >> could be either octal or decimal: a leading 0 indicated that the
> >> number was octal - also a common convention in programming languages
> >> at the time.
> >>
> >> The "dotted decimal" convention evolved from the "dotted quad", with
> >> the
> >> difference being that the numbers in the "...decimal" form were of
> >> course always decimal, regardless of the presence of a leading zero.
> >>
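> >> A minimal sketch of the difference (an editorial illustration, not
> >> from the original messages; Python is used only as notation): the same
> >> field parses differently under the two conventions when it carries a
> >> leading zero.
> >>
> >>   def quad_field(s):
> >>       # "dotted quad": a leading zero marks the field as octal
> >>       return int(s, 8) if len(s) > 1 and s[0] == "0" else int(s)
> >>
> >>   def decimal_field(s):
> >>       # "dotted decimal": always base 10; leading zeroes are ignored
> >>       return int(s, 10)
> >>
> >>   addr = "010.002.000.052"
> >>   print([quad_field(f) for f in addr.split(".")])     # [8, 2, 0, 42]
> >>   print([decimal_field(f) for f in addr.split(".")])  # [10, 2, 0, 52]
> >>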
> >> I believe all of these forms were created as various people wrote user
> >> programs.  The notation is really a design decision of the user
> >> interface, converting typed IP addresses into the appropriate 32-bit
> >> fields for the underlying TCP code.
> >>
> >> Some people liked decimal numbers, others liked octal.
> >>
> >> One particularly irritating choice was pure decimal, i.e., a 32-bit
> >> number represented in decimal (no dotted quad).   The early SRI TIU
> >> (terminal concentrator) required the user to input decimal numbers,
> >> which were annoyingly difficult to calculate.  E.g., 10.0.0.5, easily
> >> recognized as Host 0 on ARPANET IMP 5, had to be typed in its 32-bit
> >> decimal format when specifying what remote computer the user wanted to
> >> access.  It was difficult to do such calculations in your head; I
> >> remember pulling out a calculator to create the appropriate many-digit
> >> decimal number.
> >>
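> >> (To make the arithmetic concrete - an editorial illustration, not part
> >> of the original message - 10.0.0.5 as a single 32-bit decimal number
> >> is 10*2^24 + 0*2^16 + 0*2^8 + 5 = 167772160 + 5 = 167772165.)
> >>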
> >> Eventually the "dotted quad" notation reached rough consensus and many
> >> host implementations of user apps (Telnet, FTP) permitted that form of
> >> specifying a target host.
> >>
> >> The "dotted decimal" convention eventually superceded the "dotted
> >> quad"
> >> notation because the quad form was often confusing.
> >>
> >> E.g., "ISIF in dotted decimal is 010.002.000.052, or 10.2.0.52", where
> >> leading zeroes are ignored.  But in dotted quad,
> >> 010.002.000.052 and 10.2.0.52 would not be equivalent.  010 would be
> >> network 8 rather than 10, and 052 would be 42 instead of 52.
> >>
> >> I don't remember who first produced dotted decimal though.   I think
> >> you'd have to look at the applications programs of the time (FTP,
> >> Telnet) to see what each used for its UI.
> >>
> >> /Jack
> >>
> >>
> >> On 12/28/20 12:55 PM, Brian E Carpenter via Internet-history wrote:
> >>> Can anyone recall when and by whom the dotted decimal notation for
> >>> IPv4 addresses was invented? This text first appeared in RFC820
> >>> (January 1983):
> >>>
> >>>    One commonly used notation for internet host addresses divides the
> >>>    32-bit address into four 8-bit fields and specifies the value of each
> >>>    field as a decimal number with the fields separated by periods.  This
> >>>    is called the "dotted decimal" notation.  For example, the internet
> >>>    address of ISIF in dotted decimal is 010.002.000.052, or 10.2.0.52.
> >>>
> >>> The leading zeroes are not considered valid these days.
> >>>
> >>> Thanks
> >>>    Brian Carpenter
> >>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>


-- 
Please send any postal/overnight deliveries to:
Vint Cerf
1435 Woodhurst Blvd
McLean, VA 22102
703-448-0965

until further notice

