[ih] Dotted decimal notation

John Day jeanjour at comcast.net
Mon Dec 28 14:44:15 PST 2020


Yeah, DEC liked octal.  But it would have been so much better if they had settled on hex.  It would have made creating subnet masks much easier!  ;-)
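
(To make the point concrete with a worked example of my own: the /20 mask
255.255.240.0 is FF.FF.F0.00 in hex, where every 4-bit digit is all ones
or all zeros, but 0377.0377.0360.0000 in octal, where the 3-bit digits
line up with neither the 8-bit fields nor the mask boundary.)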

> On Dec 28, 2020, at 16:54, Jack Haverty via Internet-history <internet-history at elists.isoc.org> wrote:
> 
> IIRC, this convention was created over time by a coalescence of "rough
> consensus and running code" as the early user programs (Telnet and FTP)
> were being rewritten to use TCP instead of NCP, so it would have been
> during the late 70s.   On the ARPANET, e.g., using a particular Telnet,
> you would type "O <host>/<imp>", e.g., 1/6 to connect to MIT-DMS, host 1
> on IMP 6, or "O 70", which was the equivalent.   Something new was needed
> for specifying 32-bit IP addresses.
> 
> Dotted quad was one early approach, where each of the 4 numbers could be
> either octal, if it had a leading zero, or otherwise decimal.
> A leading 0 indicating octal was also a common
> convention in programming languages at the time.
> 
> The "dotted decimal" convention evolved from the "dotted quad", with the
> difference being that the numbers in the "...decimal" form were of
> course always decimal, regardless of the presence of a leading zero.  
> 
> I believe all of these forms were created as various people wrote user
> programs.  The notation is really a design decision of the user
> interface, converting typed IP addresses into the appropriate 32-bit
> fields for the underlying TCP code.
> 
> Some people liked decimal numbers, others liked octal. 
> 
> One particularly irritating choice was pure decimal, i.e., a 32-bit
> number represented in decimal (no dotted quad).   The early SRI TIU
> (terminal concentrator) required the user to input the address as a
> single decimal number, which was annoyingly difficult to calculate.
> E.g., 10.0.0.5, easily recognized as Host 0 on ARPANET IMP 5, had to be
> typed as its full 32-bit value in decimal when specifying what remote
> computer the user wanted to
> access.  It was difficult to do such calculations in your head; I
> remember pulling out a calculator to create the appropriate many-digit
> decimal number.
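> 
> For the record, the arithmetic is simple enough to sketch (in Python,
> purely for illustration; obviously not what the TIU itself ran):
> 
>     # Convert a dotted-quad address into the single 32-bit decimal
>     # number the user had to type, e.g. 10.0.0.5 -> 167772165.
>     def quad_to_decimal(addr: str) -> int:
>         a, b, c, d = (int(octet) for octet in addr.split("."))
>         return (a << 24) | (b << 16) | (c << 8) | d
> 
>     print(quad_to_decimal("10.0.0.5"))   # prints 167772165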
> 
> Eventually the "dotted quad" notation reached rough consensus and many
> host implementations of user apps (Telnet, FTP) permitted that form of
> specifying a target host.
> 
> The "dotted decimal" convention eventually superceded the "dotted quad"
> notation because the quad form was often confusing. 
> 
> E.g., "ISIF in dotted decimal is 010.002.000.052, or 10.2.0.52", where
> leading zeroes are ignored.  But in dotted quad,
> 010.002.000.052 and 10.2.0.52 would not be equivalent.  010 would be
> network 8 rather than 10, and 052 would be 42 instead of 52.
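> 
> For illustration (a sketch of the parsing rule only, not any historical
> implementation):
> 
>     # Dotted-quad rule: a leading 0 selects octal, otherwise decimal.
>     def parse_quad_field(s: str) -> int:
>         return int(s, 8) if s.startswith("0") and len(s) > 1 else int(s, 10)
> 
>     print(parse_quad_field("010"))   # 8, not 10
>     print(parse_quad_field("052"))   # 42, not 52
>     print(parse_quad_field("10"))    # 10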
> 
> I don't remember who first produced dotted decimal though.   I think
> you'd have to look at the applications programs of the time (FTP,
> Telnet) to see what each used for its UI.
> 
> /Jack
> 
> 
> On 12/28/20 12:55 PM, Brian E Carpenter via Internet-history wrote:
>> Can anyone recall when and by whom the dotted decimal notation for IPv4
>> addresses was invented? This text first appeared in RFC820 (January 1983):
>> 
>>   One commonly used notation for internet host addresses divides the
>>   32-bit address into four 8-bit fields and specifies the value of each
>>   field as a decimal number with the fields separated by periods.  This
>>   is called the "dotted decimal" notation.  For example, the internet
>>   address of ISIF in dotted decimal is 010.002.000.052, or 10.2.0.52.
>> 
>> The leading zeroes are not considered valid these days.
>> 
>> Thanks 
>>   Brian Carpenter
> 
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
