[ih] Design choices in SMTP
John Levine
johnl at iecc.com
Tue Feb 7 17:31:04 PST 2023
It appears that Dave Crocker via Internet-history <dcrocker at bbiw.net> said:
>On 2/7/2023 9:26 AM, John Klensin via Internet-history wrote:
>> * SMTP was designed along the model of FTP, which preceded it as the mail
>> transport mechanism, and that made the command-response model seem natural
>
>As I recall, there was discussion about whether the exchange should do a
>transaction for each addressee, versus for the entire set of
>addressees. I believe per-addressee was chosen to greatly simplify
>failure analysis.
>
>If addresses had been done as a single part of the transaction, there's
>the task of figuring out which one(s) were the problem.
It was pretty clever the way pipelining solved that for free. We still
have the question of what the server should say if, after it has
received the message, it finds that some of the recipients can accept
it and some can't, but we've lived with that for 40 years.
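To make the "for free" point concrete, here's a minimal sketch (my illustration, not from the protocol specs or the original post): with pipelining the client batches MAIL FROM and all the RCPT TO commands in one write, and the server's replies come back in the same order, so the k-th RCPT reply identifies the k-th recipient with no extra round trips. The addresses and reply codes below are made up for the example.

```python
# Sketch: pairing pipelined RCPT TO replies with their recipients.
# Replies arrive in the same order the commands were sent, so
# per-recipient failure attribution costs nothing extra.

def match_rcpt_replies(recipients, replies):
    """Pair each pipelined RCPT TO reply code with its recipient.

    recipients: addresses, in the order RCPT TO was sent
    replies: SMTP reply codes, one per RCPT TO, same order
    Returns (accepted, refused) lists.
    """
    accepted, refused = [], []
    for rcpt, code in zip(recipients, replies):
        # 2xx means the server accepted this recipient.
        (accepted if 200 <= code < 300 else refused).append(rcpt)
    return accepted, refused

# One batched exchange: three recipients, replies in order.
accepted, refused = match_rcpt_replies(
    ["alice@example.com", "bob@example.com", "carol@example.com"],
    [250, 550, 250],  # 550 = mailbox unavailable
)
print(accepted)  # ['alice@example.com', 'carol@example.com']
print(refused)   # ['bob@example.com']
```

The lingering problem John describes is different: it arises after DATA, when the server has only one reply code left to cover a message some recipients can take and some can't.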
There were, and to some degree still are, a lot of assumptions baked
into the design, like bandwidth being expensive and round-trips being
relatively cheap.
When Dan Bernstein wrote qmail in 1998, he not so much challenged as
ignored many of those assumptions. He observed that mail servers
generally have a whole lot of mail in flight, and that people care
more about the overall number of messages delivered per minute or per
hour, so he tuned it to handle lots of incoming and outgoing
connections and fill up the pipe rather than trying to optimize each
connection.
His most contentious decision was that qmail delivered one message to
one recipient at a time: if you sent a message to ten people at a
recipient system, it sent ten copies. A lot of people got bent out of
shape at that (some still are), but in retrospect he was right. These
days most bulk mail systems customize each recipient's message a
little, so it's all single-recipient messages sent in parallel anyway.
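As an illustration of that fan-out design (my sketch, not qmail's actual code), one submitted message to N recipients becomes N single-recipient envelopes, each of which can be delivered over its own connection and customized independently. The names and addresses here are invented for the example.

```python
# Sketch: qmail-style fan-out. A multi-recipient submission is split
# into single-recipient envelopes up front; delivery then runs many
# such envelopes in parallel instead of optimizing one connection.

from dataclasses import dataclass

@dataclass
class Envelope:
    sender: str
    recipient: str  # exactly one recipient per envelope
    body: str

def fan_out(sender, recipients, body):
    """Split one multi-recipient submission into per-recipient envelopes."""
    return [Envelope(sender, r, body) for r in recipients]

envs = fan_out("list@example.com",
               ["a@dest.example", "b@dest.example", "c@dest.example"],
               "Hello")
print(len(envs))          # 3 separate deliveries
print(envs[1].recipient)  # 'b@dest.example'
```

The duplication that upset people is visible here: three copies of the same body cross the wire to the same destination. The trade is more bandwidth for dead-simple per-recipient failure handling and per-recipient customization.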
R's,
John