[ih] A revolution in Internet point-of-view - Was Re: Internet analyses (Was Re: IPv8...)

Jack Haverty jack at 3kitty.org
Fri May 1 16:51:28 PDT 2026


No, I was referring to a "waterfall diagram" rather than the "waterfall 
model" of software development.

The diagram showed the interactions as a computer program ran and 
interacted with others over some kind of network.  It looked a bit like 
a PERT chart but wasn't one.

An application that used to run locally (in one computer or perhaps on a 
LAN) involved interactions that took microseconds.  When that same 
application operated over a wide area network, such as The Internet, the 
same interactions would take much longer.

The waterfall diagram showed all the interactions with a vertically 
oriented timeline.   Interactions that were organized to happen serially 
were fine in a local environment, but took much longer over remote links.

To create such a diagram in the early days, you would use TCPDUMP (today 
probably Wireshark) to capture the interactions happening over your 
LAN.   Then you would pore over them to extract all of the interactions 
- much like the old days poring over listings of "core dumps" to figure 
out why your program crashed.

Then you would draw all the interactions with vertical lines for each 
computer involved, and near-horizontal lines to show the interactions 
between computers.  There could be more than two computers involved.

From the TCPDUMP data, you could add time information to each 
interaction, and then draw the interaction lines appropriately. Nearly 
horizontal lines indicated very rapid interactions.  Heavily slanted 
lines indicated long times involved.   With such a diagram it was much 
easier to see where the performance bottlenecks were. Sometimes it 
indicated an overloaded server.   But it could also indicate an 
application not designed for a network environment, with lots of serial 
interactions that took a long time over a network.
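As a rough illustration, the bookkeeping behind such a diagram can be sketched in a few lines of Python.  The event list here is invented for illustration; a real one would be parsed from TCPDUMP or Wireshark output (e.g. a capture taken with "tcpdump -ttt -n"):

```python
# A minimal sketch of turning timestamped capture data into a text
# waterfall view.  Each event is (time_in_seconds, source, destination,
# label); the values below are hypothetical.

def waterfall(events):
    """One line per interaction, annotated with the elapsed time since
    the previous event, so slow ('heavily slanted') steps stand out."""
    lines = []
    prev = events[0][0]
    for t, src, dst, label in events:
        gap = t - prev
        marker = "!!" if gap > 0.1 else "  "   # flag slow interactions
        lines.append(f"{t:8.3f}s {marker} {src:>8} -> {dst:<8} {label} (+{gap:.3f}s)")
        prev = t
    return "\n".join(lines)

# Hypothetical client/server exchange over a wide-area link.
events = [
    (0.000, "client", "server", "SYN"),
    (0.120, "server", "client", "SYN/ACK"),       # one WAN round trip
    (0.121, "client", "server", "ACK + request"),
    (0.450, "server", "client", "response"),      # slow server: bottleneck
]
print(waterfall(events))
```

Serially organized interactions show up as a long chain of such lines, each paying a round-trip delay; that is exactly what the slanted lines on the paper diagrams revealed.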

We used such diagrams for diagnosing performance issues back in the 
1980s, and in the 1990s to diagnose client/server performance.  They 
were also very useful after the web appeared.  DebugBear and others 
offer diagramming tools for use in today's web applications - see 
https://www.debugbear.com/docs/waterfall

Below is some AI's (sorry, can't tell whose) explanation of a waterfall 
diagram used to diagnose web performance.   But they are also useful in 
any application using a network.

/Jack Haverty

-----------------------------------------------------------------

A networking waterfall diagram is a *2D visual representation of the 
sequence of network events* happening over a timeline during a webpage 
load, showing when each resource (HTML, CSS, JS, images) starts and 
stops loading.


      Key Components of the Diagram

Each bar in the chart represents a specific network request and is 
color-coded to break down the total load time into distinct phases:

  * *Blocking/Queue Time*: Time spent waiting in the browser's queue for
    an available connection or to prioritize other resources.

  * *DNS Lookup*: The time taken to resolve the domain name into an IP
    address.

  * *Connection Time*: The duration to establish a TCP connection
    (three-way handshake) and an SSL/TLS handshake for secure connections.

  * *Sending/Request Time*: The time to send the request headers to the
    server.

  * *Waiting (TTFB)*: Time to First Byte, representing the wait for the
    server to process the request and begin sending the response.

  * *Receiving/Download Time*: The duration to transfer the resource
    data from the server to the client.
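
Computing those phases from per-request timestamps is simple subtraction.  A sketch, using field names borrowed from the browser Resource Timing API (timestamps in milliseconds; the values are invented):

```python
# Hypothetical per-request timestamps, milliseconds since fetch began.
# Field names follow the W3C Resource Timing API.
timing = {
    "fetchStart": 0.0,
    "domainLookupStart": 5.0,  "domainLookupEnd": 30.0,
    "connectStart": 30.0,      "connectEnd": 95.0,
    "requestStart": 95.0,
    "responseStart": 280.0,    "responseEnd": 310.0,
}

# Each color-coded bar segment is the difference of two timestamps.
phases = {
    "Blocking/Queue": timing["domainLookupStart"] - timing["fetchStart"],
    "DNS Lookup":     timing["domainLookupEnd"] - timing["domainLookupStart"],
    "Connection":     timing["connectEnd"] - timing["connectStart"],
    "Waiting (TTFB)": timing["responseStart"] - timing["requestStart"],
    "Receiving":      timing["responseEnd"] - timing["responseStart"],
}

for name, ms in phases.items():
    print(f"{name:15} {ms:6.1f} ms")
print(f"{'Total':15} {timing['responseEnd'] - timing['fetchStart']:6.1f} ms")
```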


      Interpretation and Optimization

  * *Sequential vs. Parallel*: The diagram reveals whether requests are
    loading sequentially (one after another, creating a "waterfall"
    effect) or in parallel. The *HTTP/2* protocol allows multiple
    resources to load simultaneously over a single connection,
    flattening the waterfall and improving performance.

  * *Identifying Bottlenecks*: Long bars indicate slow resources, while
    large gaps between the start of one request and the start of the
    next (blocked time) indicate connection limits or render-blocking
    scripts.

  * *Optimization Goal*: The primary objective is to *reduce the number
    of steps* (sequential dependencies) and minimize the time spent in
    the blocking, DNS, and connection phases, rather than just reducing
    file sizes.
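
The sequential-vs-parallel point is essentially arithmetic.  A toy calculation (request times invented for illustration):

```python
# Six hypothetical requests, in milliseconds.
requests_ms = [120, 80, 80, 60, 200, 90]

# Strictly serial: each request waits for the previous one to finish.
sequential_total = sum(requests_ms)

# Idealized parallel load (e.g. multiplexed over one HTTP/2 connection):
# total time is bounded by the slowest single request.
parallel_total = max(requests_ms)

print(f"sequential: {sequential_total} ms")  # 630 ms
print(f"parallel:   {parallel_total} ms")    # 200 ms
```

Real loads fall between the two extremes because of dependencies (HTML must arrive before the CSS it references can be requested), which is exactly what the waterfall makes visible.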




On 5/1/26 16:12, Dave Crocker wrote:
> On 5/1/2026 1:58 PM, Jack Haverty via Internet-history wrote:
>> Does anyone remember the origin
>
> fwiw...
>
>     https://en.wikipedia.org/wiki/Waterfall_model
>
> d/
>
> -- 
> Dave Crocker
>
> dhc at dcrocker.net
> bluesky: @dcrocker.bsky.social
> mast: @dcrocker at mastodon.social
> +1.408.329.0791
>
> Volunteer, Silicon Valley Chapter
> Northern California Coastal Region
> Information & Planning Coordinator
> American Red Cross
> dave.crocker2 at redcross.org
