[ih] booting linux on a 4004
Barbara Denny
b_a_denny at yahoo.com
Thu Oct 10 10:55:19 PDT 2024
Just an FYI.
I can see how IPv6 requirements in this area might be problematic for packet radio networks. I will admit my knowledge is old, so things may have changed.
I also don't know how the numbers in the IPv6 specification were selected.
barbara
On Thursday, October 10, 2024 at 09:56:26 AM PDT, Barbara Denny via Internet-history <internet-history at elists.isoc.org> wrote:
Reminds me of how much effort went into selecting a packet size and the coding at the link layer in a packet radio environment. If you are interested, I think Mike Pursley (Clemson) might have done the analysis for us (SRI) when we were working on porting the Packet Radio protocols to the SINCGARS radio. I looked recently to see if I could find a writeup, but couldn't quickly find anything regarding this particular effort (our discussion regarding MTU got me thinking on this topic). The web page for him at Clemson does mention that his current research is network coding for packet radio networks. :-)
barbara
On Thursday, October 10, 2024 at 05:53:34 AM PDT, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
Hi Greg:
Thanks for correcting my faulty memory. As partial recompense for being
wrong, I'll note that I have a partial set of the end2end-interest archives
if there are questions, and I offer the following tidbit:
Posted-Date: Tue, 31 Mar 87 17:58:17 PST
To: Craig Partridge <craig at LOKI.BBN.COM>
Cc: end2end-tf at venera.isi.edu
Subject: Re: Thinking about Congestion
In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST.
Date: Tue, 31 Mar 87 17:58:17 PST
From: Van Jacobson <van at lbl-csam.ARPA>
Craig -
Your note pushed one of my buttons: Sending a lot of data
into a congested network doesn't improve transmit efficiency
any more than disconnecting the collision detect wire on
your ethernet would. Either action makes everyone on the net,
including you, lose.
There is always an optimum window size but computing it requires
knowing how packet loss scales with window size. To first order,
the scaling will be the exponential (1 - A)^W where W is the
window size and A is a network dependent constant (0 < A < 1).
For a long haul net, no-loss throughput will scale with window
size like W/T where T is the round trip time. The effective
throughput will go like the product of these two terms. For
small W the linear term dominates and you see linear throughput
increase with increasing window size. For large W the loss term
dominates and you see exponential throughput decrease with
increasing window size. For small A (low loss rates), the
optimum window size will scale like -1/log(1-A).
It's possible to do a more exact analysis. A few years ago a
friend of mine was working on a tcp/ip implementation for a well
known supercomputer manufacturer. At the time there was a huge
debate in the company on whether to "modify" tcp. It seems that
some cretin in management had decided that the only way to get
good network performance was to do huge transfers, where "huge"
was much larger than the 64K allowed by the tcp window size
field. I was simulating very high performance fiber optic nets
at the time and found this argument to be completely at odds with
my results. I was so incensed that I wrote a little 5 page paper
for my friend titled "Some notes on choosing an optimum transfer
size" that started out:
"The choice of network transfer size seems to have been
driven by the idea that ``bigger is better''. While this
reflects a good, American upbringing, it bears only faint
resemblance to reality. In the unlikely event that a future
decision is made on rational grounds, this note describes the
mathematical basis for choice of window and transfer size."
I'm afraid it went on in much the same tone (I must have been
drunk when I wrote it) but I did summarize how to apply Erlang's
and Hill's loss functions to tcp (the same analysis would apply
to rdp - the only difference is rdp gains a factor of two in
throughput over tcp at very high loss rates). If you're
interested in the math, I'd be glad to send you extracts from
this thing or the references I used.
- Van
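
A minimal Python sketch of the first-order model Van describes above: no-loss
throughput grows like W/T, the chance of getting a whole window through falls
like (1 - A)^W, and the product peaks near W* = -1/log(1 - A). The values
chosen below for A and T are illustrative assumptions, not figures from his
note or from any measured network.

import math

def effective_throughput(w, a, rtt):
    # First-order model: no-loss throughput ~ W/T, scaled by the
    # per-window success probability (1 - A)^W.
    return (w / rtt) * (1.0 - a) ** w

def optimum_window(a):
    # First-order optimum window size for small A: W* = -1 / log(1 - A).
    return -1.0 / math.log(1.0 - a)

if __name__ == "__main__":
    a = 0.01    # assumed network-dependent loss constant A (0 < A < 1)
    rtt = 0.1   # assumed round-trip time T in seconds
    print(f"optimum window ~ {optimum_window(a):.1f} packets")
    for w in (1, 10, 50, 100, 200, 400):
        thr = effective_throughput(w, a, rtt)
        print(f"W = {w:4d}  effective throughput ~ {thr:8.1f} packets/s")

Running it with these assumed numbers shows the linear term dominating for
small W and the exponential loss term overwhelming it for large W, which is
exactly the behavior described above.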
On Thu, Oct 10, 2024 at 12:47 AM Greg Skinner <gregskinner0 at icloud.com>
wrote:
>
> On Oct 5, 2024, at 5:42 PM, Craig Partridge <craig at tereschau.net> wrote:
>
>
> As someone who was in touch with Raj/KK and Van/Mike during the
> development of congestion control: they were unaware of each other's work
> until spring of 1988, when they realized they were doing very similar
> stuff. I think someone (Dave Clark) in the End2End Research Group became
> aware of Raj & KK's work and invited them to present at an E2E meeting in
> early 1988, and E2E (more than IETF) was where Van was working out the
> kinks in his congestion control work with Mike.
>
> Craig
>
>
> I looked into this a bit, and discovered that Raj/KK and Van/Mike were all
> at the 6th IETF, which took place in April 1987. [1] (It was a joint
> meeting of the IETF and ANSI X3S3.3 Network and Transport Layer standards
> groups.) Both teams presented their work at the meeting.
>
> On Sat, Oct 5, 2024 at 5:34 PM John Day via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> The work of Jain’s DEC team existed at the same time and I believe
>> Jacobson’s original paper references it.
>>
>> As I said, at least it does congestion avoidance without causing
>> congestion (except under extreme conditions).
>>
>> I suspect that the main reason Jacobson didn’t adopt it was that they
>> were trying to maximize the data rate by running as close to congestion
>> collapse as they could, while Jain’s work attempted to balance the
>> trade-off between throughput and response time. But that is just policy;
>> they still could have used ECN to keep from being predatory, waiting
>> until the queue is full to mark the packets. That is what TCP's use of
>> ECN does now. Of course, I think that is a bad choice because it
>> generates lots of retransmissions.
>>
>>
> Some of the reasons why Van/Mike took the approach they did were discussed
> in an email message Van sent to the tcp-ip list. It included some
> discussions that had taken place on the ietf and end2end-interest lists.
> [2] IMO, it’s unfortunate that the existing archives of those lists are
> incomplete, because otherwise we would be able to read the points of view
> expressed by the list participants.
>
>> When I asked Jain why his wasn’t adopted, he said he isn’t an implementor,
>> but an experimenter.
>>
>> But it is not uncommon to be so focused on the immediate problem as to
>> fail to notice the system implications.
>>
>
> John, what could they have done that would have met your criteria and
> yielded a deployable solution to the congestion problems existing at that
> time, in the timeframe in which it was needed? IMO, their paper should be
> assessed in that context.
>
> --gregbo
>
> [1] https://www.ietf.org/proceedings/06.pdf
> [2] https://ee.lbl.gov/tcp.html
>
>
--
*****
Craig Partridge's email account for professional society activities and
mailing lists.
--
Internet-history mailing list
Internet-history at elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history