[ih] Intel 4004 vs the IMP

Brian E Carpenter brian.e.carpenter at gmail.com
Mon Nov 15 17:11:38 PST 2021


As late as 1973, I believe you still had to go up the Digital
range as far as a PDP-11/45 to get multiple levels of interrupt
priority, so that a high priority interrupt could interrupt
processing of a Teletype interrupt, for example. Honeywell
was well ahead of that game. (As was an IBM 1800, but that had
ten times the footprint.)

I never touched a 4004, but as far as I can see it only had CPU-
controlled 4-bit I/O ports, with neither interrupts nor DMA. It
would have been a busy little bee trying to do an IMP's job.

Regards
    Brian

On 16-Nov-21 11:54, Jack Haverty via Internet-history wrote:
> True for more modern systems, but in the era of the 316/516, inexpensive
> computers sometimes did I/O in the simplest possible manner.  E.g., to
> handle I/O on a serial interface, the CPU might have to take an
> interrupt on every byte, read that byte from the hardware interface, and
> re-enable the interface quickly enough for it to be ready to handle the
> next byte.  The hardware only buffered a single byte at a time.  The
> CPU also had to do that all fast enough to handle all the streams of
> interrupts from all its interfaces in order not to lose data.   That
> would occur if a particular line received a byte and raised its
> interrupt, but the processor was too busy handling other interrupts and
> didn't get to that one before the next character had arrived on the
> serial line.  It got worse, of course, as line speeds were increased.
> 
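
For concreteness, here is a minimal C sketch of that per-character discipline.
The "hardware" is simulated and every name is invented, so this is illustrative
only, not actual 316/516 or PDP-8 code:

    #include <stdio.h>

    /* Stub "hardware": the interface buffers exactly one character. */
    static unsigned char rx_buf;

    static unsigned char read_rx_register(void) { return rx_buf; }
    static void reenable_rx(void) { /* re-arm the interface for the next byte */ }

    /* Called once per received character; it has to finish (and re-enable
       the interface) before the next character arrives on the line. */
    static void serial_rx_interrupt(void)
    {
        unsigned char c = read_rx_register();  /* read the one buffered byte  */
        putchar(c);                            /* "process" it (here: print)  */
        reenable_rx();
    }

    int main(void)
    {
        const char *incoming = "HELLO";          /* pretend five interrupts fire */
        for (const char *p = incoming; *p; p++) {
            rx_buf = (unsigned char)*p;
            serial_rx_interrupt();
        }
        putchar('\n');
        return 0;
    }
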
> That's how a PDP-8/I I worked with in 1968 handled its I/O.   I think that kind
> of issue is the one Alex referred to about selecting the 316 because of
> its interrupt mechanism.   Since the IMP was essentially a multi-port
> I/O handler, how the hardware handled I/O on all those interfaces was a
> crucial factor in selecting the 516.   That's why I suggested that the
> I/O capabilities of a microprocessor needed to be considered when trying
> to figure out how it compared to the 516, more so than just classic
> metrics like raw memory and CPU speed. About ten years ago I dug into an
> early version of the IMP code to figure out how it worked.   The main
> CPU was essentially an interrupt processor, waiting in an idle loop for
> an interrupt to occur and then handling it fast enough to get to the
> next interrupt in time to avoid losing any data.
> 
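
Purely to illustrate that shape (this is not the real IMP code, and all the
names here are made up), a toy C version of a CPU spent almost entirely in an
interrupt-dispatch idle loop:

    #include <stdbool.h>
    #include <stdio.h>

    /* Simulated "interrupt pending" conditions for three interfaces; on the
       real machine these were hardware signals, not variables. */
    static bool modem_pending = true, host_pending = true, timer_pending = true;

    static void service_modem(void) { puts("modem line serviced");     modem_pending = false; }
    static void service_host(void)  { puts("host interface serviced"); host_pending  = false; }
    static void service_timer(void) { puts("timer serviced");          timer_pending = false; }

    int main(void)
    {
        /* The idle loop: do nothing until some source needs service, then
           handle it quickly enough to be back here before the next one. */
        while (modem_pending || host_pending || timer_pending) {
            if      (modem_pending) service_modem();
            else if (host_pending)  service_host();
            else if (timer_pending) service_timer();
        }
        return 0;   /* a real IMP, of course, never got here */
    }
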
> As machines matured and costs dropped, hardware interfaces became
> smarter and could process large chunks of data for each interrupt.
> Essentially the interface contained a "co-processor" that offloaded the
> main CPU.  I don't recall how the earliest micros handled interrupts and
> I/O, but that's why it's important to look at the I/O capabilities for
> Steve's question.
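
A correspondingly hedged C sketch of that "one interrupt per chunk" style,
again with invented names, just to show the contrast with the per-byte case
above:

    #include <stdio.h>
    #include <string.h>

    /* With a smarter ("co-processor") interface, the hardware deposits a whole
       packet into memory by DMA and interrupts once per packet, not per byte. */
    struct dma_descriptor {
        unsigned char buffer[1500];   /* where the interface wrote the packet */
        int           length;         /* how many bytes it wrote              */
    };

    /* One interrupt covers the whole chunk; the CPU never touched the bytes
       while they were arriving. */
    static void dma_complete_interrupt(const struct dma_descriptor *d)
    {
        printf("packet of %d bytes ready in memory\n", d->length);
    }

    int main(void)
    {
        struct dma_descriptor d;
        memcpy(d.buffer, "example packet", 14);   /* stand-in for the DMA engine */
        d.length = 14;
        dma_complete_interrupt(&d);               /* a single interrupt per packet */
        return 0;
    }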
> 
> /Jack Haverty
> 
> 
> On 11/15/21 1:53 PM, Noel Chiappa via Internet-history wrote:
>>       > From: Jack Haverty
>>
>>       > IIRC, another of the important criteria for selecting the Honeywell 516
>>       > was the I/O performance characteristics
>>       > ...
>>       > So in looking for the earliest "comparable" microprocessor, in addition
>>       > to comparing metrics such as CPU speed and memory, I think you have to
>>       > look at I/O characteristics
>>
>> Yes, but... in a _router_, the CPU didn't need to _ever_ look at most of the
>> data in the packet. In anything that did TCP, yeah, you had to do the
>> checksum, and that almost always needed the CPU to fondle each byte, but in
>> e.g. the CGW, since it _never_ copied packets around (not even e.g. to make
>> room for a longer physical network header on the front), if a packet came in
>> one DMA interface, and out another, the CPU never even saw most of the bytes
>> in the packet, so the CPU speed was not too relevant; it was all bus-bandwidth
>> dependent.
>>
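
The per-byte (or per-word) work Noel mentions for TCP is the familiar
one's-complement Internet checksum; a minimal C version, not taken from any of
the routers discussed, looks like this:

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* One's-complement sum over 16-bit words (RFC 1071 style): the CPU has to
       read every byte of the data, unlike the pure store-and-forward path. */
    static uint16_t internet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        while (len > 1) {
            sum  += ((uint32_t)data[0] << 8) | data[1];   /* one 16-bit word */
            data += 2;
            len  -= 2;
        }
        if (len == 1)                          /* odd trailing byte, padded with zero */
            sum += (uint32_t)data[0] << 8;
        while (sum >> 16)                      /* fold the carries back in */
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)~sum;
    }

    int main(void)
    {
        uint8_t payload[] = { 0x45, 0x00, 0x00, 0x1c, 0x12, 0x34 };
        printf("checksum = 0x%04x\n", internet_checksum(payload, sizeof payload));
        return 0;
    }
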
>> Early on, not all network interfaces were DMA; there were three different
>> approaches:
>>
>>    - DMA
>>    - full packet buffers in the interface, but the CPU had to manually move
>>      bytes from the interface to buffers in memory, so 3 bus cycles/word (with
>>      an unrolled loop with pointers to device and buffer in registers: i)
>>      instruction fetch, ii) read from interface, iii) write to memory), so 3
>>      times as much bus traffic per word, compared to DMA (see the sketch
>>      after this list)
>>    - interrupt per word
>>
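
A toy C rendering of that programmed-I/O copy loop from the second case (all
names invented), just to make the bus-cycle accounting concrete:

    #include <stdint.h>
    #include <stdio.h>

    #define PKT_WORDS 8

    /* Stand-in for the interface's on-board packet buffer; on a real machine
       this would be device memory reached over the bus, not an ordinary array. */
    static uint16_t interface_buffer[PKT_WORDS] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    int main(void)
    {
        uint16_t memory_buffer[PKT_WORDS];

        /* Programmed I/O: the CPU itself moves every word, so each word costs
           roughly an instruction fetch, a read from the interface, and a write
           to memory -- the "3 bus cycles/word" above.  With DMA the CPU does
           none of this. */
        const uint16_t *src = interface_buffer;   /* "pointer to device" in a register */
        uint16_t       *dst = memory_buffer;      /* "pointer to buffer" in a register */
        for (int i = 0; i < PKT_WORDS; i++)
            *dst++ = *src++;

        printf("last word copied: %u\n", memory_buffer[PKT_WORDS - 1]);
        return 0;
    }
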
>> But even the latter wasn't _necessarily_ a problem; at MIT, the main ARPANET
>> gateway for quite a while used the Stanford/SRI 1822 Interface:
>>
>>     https://gunkies.org/wiki/Stanford_1822_Interface
>>
>> which was interrupt/byte, but performance wasn't a problem (that I recall).
>>
>> Like I said, for all early routers, performance was not an issue. Performance
>> only became an issue when there were multiple COTS router vendors, when
>> performance became an easy way for their marketing people to distinguish their
>> products from those of competitors. I doubt the users could ever have told the difference.
>>
>>
>> I don't know if _early_ microprocessors (i.e. long before Motorola 68Ks, Intel
>> x86's, etc) supported DMA on their memory busses; even for those that didn't,
>> it might have been possible to build an external bus controller, and either
>> stall the CPU (if the memory bus was busy doing DMA), or build a multi-port
>> main memory, or something.
>>
>>        Noel
> 
> 



