From stevenehrbar at elp.rr.com Tue Oct 6 02:26:03 2009
From: stevenehrbar at elp.rr.com (Steven Ehrbar)
Date: Tue, 06 Oct 2009 03:26:03 -0600
Subject: [ih] What is the origin of the private network address 192.168.*.*?
In-Reply-To: <20090910193341.GA28251@nic.fr>
References: <20090910193341.GA28251@nic.fr>
Message-ID: <4ACB0D2B.2000400@elp.rr.com>

Stephane Bortzmeyer wrote:
> Is there a real historical reason for that particular choice of numbers?
> Why not 127.127..? Or 128.128..?

This is a fuzzy recollection of something I believe I read, which might well be inaccurate, and for which I can find no corroboration.  I mention it solely because it might spark memories from someone who actually knows:

A company used 192.168.x.x example addresses in some early documentation.  A number of people followed the manual literally when setting up their internal networks.  As a result, it was already being used on a rather large number of private networks anyway, so it was selected when RFC 1597 was adopted.

From randy at psg.com Tue Oct 6 03:05:27 2009
From: randy at psg.com (Randy Bush)
Date: Tue, 06 Oct 2009 03:05:27 -0700
Subject: [ih] What is the origin of the private network address 192.168.*.*?
In-Reply-To: <4ACB0D2B.2000400@elp.rr.com>
References: <20090910193341.GA28251@nic.fr> <4ACB0D2B.2000400@elp.rr.com>
Message-ID:

> A company used 192.168.x.x example addresses in some early
> documentation

sun

From lyndon at orthanc.ca Tue Oct 6 10:24:26 2009
From: lyndon at orthanc.ca (Lyndon Nerenberg - VE6BBM/VE7TFX)
Date: Tue, 6 Oct 2009 11:24:26 -0600
Subject: [ih] What is the origin of the private network
In-Reply-To:
Message-ID:

>> A company used 192.168.x.x example addresses in some early
>> documentation
>
> sun

Wasn't 192.9.200.x Sun's example network?

From randy at psg.com Tue Oct 6 10:40:20 2009
From: randy at psg.com (Randy Bush)
Date: Tue, 06 Oct 2009 10:40:20 -0700
Subject: [ih] What is the origin of the private network
In-Reply-To:
References:
Message-ID:

>>> A company used 192.168.x.x example addresses in some early
>>> documentation
>> sun
> Wasn't 192.9.200.x Sun's example network?

of course you are correct.  sorry.  jet lag and not enough coffee.
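(For reference, the three blocks that RFC 1597 reserved for private internets - and that RFC 1918 later kept - are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. The short Python sketch below, using only the standard-library ipaddress module, checks whether an address falls inside them; the sample addresses are purely illustrative.)

    import ipaddress

    # Address blocks reserved for private internets by RFC 1597 (kept by RFC 1918).
    PRIVATE_BLOCKS = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_private(addr: str) -> bool:
        """Return True if addr lies in one of the RFC 1597/1918 private blocks."""
        ip = ipaddress.ip_address(addr)
        return any(ip in block for block in PRIVATE_BLOCKS)

    print(is_private("192.168.1.10"))  # True  - the block discussed in this thread
    print(is_private("192.9.200.1"))   # False - Sun's old documentation example net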
From ubicconf at yahoo.com Tue Oct 13 01:57:20 2009
From: ubicconf at yahoo.com (Ubic Conf)
Date: Tue, 13 Oct 2009 01:57:20 -0700 (PDT)
Subject: [ih] The First International workshop on Communications Security & Information Assurance (CSIA) - Call for paper
Message-ID: <47514.72956.qm@web113013.mail.gq1.yahoo.com>

The First International Workshop on Communications Security & Information Assurance (CSIA)
http://airccse.org/csia/csia
(In Conjunction with WiMo - 2010)
26 ~ 28 June, 2010, Ankara, Turkey

Call for Papers

The workshop focuses on all technical and practical aspects of Communications Security & Information Assurance (CSIA) and its applications for wired and wireless networks. The goal of this workshop is to bring together researchers and practitioners from academia and industry to focus on understanding modern security threats and countermeasures, and on establishing new collaborations in these areas. Authors are solicited to contribute to the workshop by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Communications Security & Information Assurance:

- Access control, Anonymity, Audit and audit reduction & Authentication and authorization
- Applied cryptography, Cryptanalysis, Digital Signatures
- Biometric security
- Boundary control devices
- Certification and accreditation
- Cross-layer design for security
- Data and system integrity, Database security
- Defensive information warfare
- Denial of service protection, Intrusion Detection, Anti-malware
- Distributed systems security
- Electronic commerce
- E-mail security, Spam, Phishing, E-mail fraud, Virus, worms, Trojan protection
- Grid security
- Information hiding and watermarking & Information survivability
- Insider threat protection, Integrity
- Intellectual property protection
- Internet/Intranet Security
- Key management and key recovery
- Language-based security
- Mobile and wireless security
- Mobile, Ad Hoc and Sensor Network Security
- Monitoring and surveillance
- Multimedia security, Operating system security, Peer-to-peer security
- Performance Evaluations of Protocols & Security Application
- Privacy and data protection
- Product evaluation criteria and compliance
- Risk evaluation and security certification
- Risk/vulnerability assessment
- Security & Network Management
- Security and Assurance in Operational, Technological, Commercial Areas
- Security Engineering and Its Application
- Security Models & Protocols
- Security threats (DDoS, MiM, session hijacking, replay attacks, etc.) and countermeasures
- Trusted computing
- Ubiquitous Computing Security
- Virtualization security, VoIP security, Web 2.0 security

Authors are invited to submit papers for the workshop by e-mail (csia.workshop at yahoo.com or csia2010 at airccse.org) by December 10, 2009. Submissions must be original and must not have been published previously or be under consideration for publication while being evaluated for this workshop. The proceedings of the conference will be published by Springer (LNCS) in the Communications in Computer and Information Science (CCIS) series (confirmed). Selected papers from CSIA 2010, after further revisions, will be published in the special issue of an international journal (pending).

Important Dates
Submission deadline: 10 December 2009
Paper status notification: 25 February 2010
Camera-ready due: 25 March 2010

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mbaer at cs.tu-berlin.de Mon Oct 26 07:38:16 2009
From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=)
Date: Mon, 26 Oct 2009 15:38:16 +0100
Subject: [ih] lack of service guarantees in the internet meaning that it cannot ever "fail"
Message-ID: <4AE5B458.8050702@cs.tu-berlin.de>

Hi, has anyone here ever seen a source with an explicit elaboration of the notion that if a subnetwork (or the Internet) makes no guarantees about its service, then it cannot logically ever fail? (Accordingly, TCP can be made to never time out even if civilization around it has dawned for good.) Just a minor point, but a somewhat intriguing one to me. Let me know if this is just stupid thinking of mine, or if others have ever elaborated on this, or if you have any thoughts or recollections.
Best,
Matthias

--
Matthias Bärwolff
www.bärwolff.de

From dhc2 at dcrocker.net Tue Oct 27 07:12:38 2009
From: dhc2 at dcrocker.net (Dave CROCKER)
Date: Tue, 27 Oct 2009 07:12:38 -0700
Subject: [ih] lack of service guarantees in the internet meaning that it cannot ever "fail"
In-Reply-To: <4AE5B458.8050702@cs.tu-berlin.de>
References: <4AE5B458.8050702@cs.tu-berlin.de>
Message-ID: <4AE6FFD6.3080000@dcrocker.net>

Matthias Bärwolff wrote:
> (Accordingly, TCP can be made to never time out even if civilization
> around it has dawned for good.)

A classic story from the early days of TCP was about Bob Braden, of USC-ISI, working at UCL in London for a while, connecting back to ISI over a satellite link. The satellite link went away and Bob got tired of waiting, so he went off to dinner. He came back some hours later, the link had been restored, and TCP casually continued with the existing session.

Timeouts for TCP are implementation artifacts, not protocol features.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net

From faber at ISI.EDU Tue Oct 27 20:04:11 2009
From: faber at ISI.EDU (Ted Faber)
Date: Tue, 27 Oct 2009 20:04:11 -0700
Subject: [ih] lack of service guarantees in the internet meaning that it cannot ever "fail"
In-Reply-To: <4AE6FFD6.3080000@dcrocker.net>
References: <4AE5B458.8050702@cs.tu-berlin.de> <4AE6FFD6.3080000@dcrocker.net>
Message-ID: <20091028030411.GG1833@zod.isi.edu>

On Tue, Oct 27, 2009 at 07:12:38AM -0700, Dave CROCKER wrote:
> Timeouts for TCP are implementation artifacts, not protocol features.

To a first approximation and in context, I agree with you. It's worth noting that the TCP specification (I'm speaking of the RFC 793 interface, not the myriad socket interfaces, because I hope the standard more clearly reflects the protocol design) does allow a user to set an optional timeout on each data buffer passed to TCP for transmission. It seems clear that the protocol was designed to allow developers access to the TCP timeout and retransmission information in this limited way when it was helpful to them in implementing their application. The base protocol has few extraneous knobs, so I assume this was carefully thought out.

I agree completely with your implication that TCP designers went out of their way to avoid requirements that implementations must break connections after a certain time had passed unless specifically requested to do so by a user. Robustness seems to be more important than service guarantees, though they were not neglected.

--
Ted Faber
http://www.isi.edu/~faber
PGP: http://www.isi.edu/~faber/pubkeys.asc
Unexpected attachment on this mail? See http://www.isi.edu/~faber/FAQ.html#SIG
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 196 bytes
Desc: not available
URL:
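(A concrete, present-day illustration of "implementation artifacts, not protocol features": a plain blocking socket never gives up on its own, and any deadline is something the application chooses to impose on itself. A minimal Python sketch follows; the host and port are placeholders.)

    import socket

    # Default stance: block indefinitely.  If the path disappears and later comes
    # back (as in the satellite story above), an idle established connection is
    # still usable once packets flow again - nothing in TCP forces it to be torn down.
    patient = socket.create_connection(("example.org", 80))  # placeholder peer
    patient.settimeout(None)  # the default: recv() waits forever

    # Here the application, not TCP, decides it will only wait 30 seconds for a reply.
    impatient = socket.create_connection(("example.org", 80), timeout=30)
    try:
        data = impatient.recv(4096)
    except socket.timeout:
        impatient.close()  # giving up is an application decision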
From jack at 3kitty.org Tue Oct 27 23:39:56 2009
From: jack at 3kitty.org (Jack Haverty)
Date: Tue, 27 Oct 2009 23:39:56 -0700
Subject: [ih] lack of service guarantees in the internet meaning that it cannot ever "fail"
In-Reply-To: <20091028030411.GG1833@zod.isi.edu>
References: <4AE5B458.8050702@cs.tu-berlin.de> <4AE6FFD6.3080000@dcrocker.net> <20091028030411.GG1833@zod.isi.edu>
Message-ID: <1256711996.3370.122.camel@localhost>

On Tue, 2009-10-27 at 20:04 -0700, Ted Faber wrote:
> It seems clear that the protocol was designed to allow developers access
> to the TCP timeout and retransmission information in this limited way
> when it was helpful to them in implementing their application. The base
> protocol has few extraneous knobs, so I assume this was carefully
> thought out.

Well, I guess I have some legitimate claim to being one of the designers, since I implemented the first PDP-11 Unix TCP and was involved in the ongoing TCP and Internet WG meetings and ICCB/IAB way back when.

Ted's analysis is right on target.  There was considerable thought and discussion given to timeouts and timing in general.  The specific assumptions we made about timing (e.g., the maximum lifetime of a packet as it wandered around the net) had a direct impact on design decisions such as the size of the sequence space, and therefore the number of bits in the various fields in the TCP and IP headers.  It also drove the requirement for use of a random number generator to pick an initial value for the TCP sequence number - if a machine rebooted before all of its "old" packets were flushed from the net, much confusion could otherwise result.

TCP/IP was designed to function in a military environment, where nasty things could happen to the net - partitioning, destruction of nodes, etc.  The design criterion was that the data should get through if at all possible, no matter how long it took.  Networks might go away and be "reconstituted" as new hardware was deployed or moved into range.

The implication of this for the TCP/IP machinery was that it should never give up - things might get better.  Connections would be broken only if the "other end" responded and performed a close or abort sequence (RST, FIN, et al).  If the other end didn't respond, the TCP/IP was to continue trying, forever, with an increasing length of time between retransmissions to avoid flooding the net.

The design principle was that only the higher-level application could legitimately decide that it was no longer worth trying to communicate, possibly because whatever it was trying to do was no longer relevant, or because it had additional knowledge that further attempts were futile, or because it had found an alternative way to accomplish its tasks.

In order to help the application make the decision whether to keep trying or abort, the TCP/IP implementation was supposed to make information about the connection behavior available to the application - e.g., tell the application that it hadn't heard anything from the other side for a while (the "timeout"), or that the other side was responsive but was not willing to take any more data (the TCP window was closed).

The TCP/IP software was never supposed to close a connection because of a timeout - it would only close a connection on instructions from the application that opened the connection, or on word from the other end to close/reset.

These kinds of timing considerations also caused the protocol itself to change as we got some field experience.  Early TCP protocol specs have several fewer states in the state machine than the final TCPV4 because of that.

The protocol used "across the wire" between conversing TCP/IP implementations was well specified.  However, if I remember correctly, the interface ("API") between a TCP/IP and its user application was only specified as an example.  There were simply too many different kinds of operating systems and computer environments around at that time to define a standard API.  IBM 360s didn't look like Unix, which didn't look like Multics, which didn't look like Tenex, which really didn't look at all like the Packet Radio OS (I forget its name...).  DOS, Windows, and Macs weren't even around yet.
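(For readers who want to see what "only specified as an example" looked like: RFC 793, section 3.8, describes the user/TCP interface as a set of abstract calls - OPEN, SEND, RECEIVE, CLOSE, ABORT and STATUS - and explicitly leaves their concrete form to each operating system. Below is a rough, purely illustrative Python rendering of that functional model; the optional timeouts on OPEN and SEND are the user timeouts Ted Faber mentioned, and none of the names here are meant as a real API.)

    from abc import ABC, abstractmethod

    class Rfc793UserInterface(ABC):
        """Sketch of the functional user/TCP interface in RFC 793, section 3.8.

        The RFC presents these calls as an example model only; each operating
        system was expected to map them onto its own conventions, which is why
        no single standard API came out of the spec itself.
        """

        @abstractmethod
        def open(self, local_port, foreign_socket=None, active=True,
                 timeout=None, precedence=None, security=None, options=None):
            """Create a connection and return a local connection name.
            The optional timeout is the user timeout for the connection."""

        @abstractmethod
        def send(self, conn, buffer, push=False, urgent=False, timeout=None):
            """Queue data for sending; the optional timeout applies to this buffer."""

        @abstractmethod
        def receive(self, conn, byte_count):
            """Return up to byte_count octets once data (or a pushed segment) arrives."""

        @abstractmethod
        def close(self, conn):
            """Graceful close: data already queued by SEND is still delivered."""

        @abstractmethod
        def abort(self, conn):
            """Abort: pending SENDs and RECEIVEs are discarded and a RST is sent."""

        @abstractmethod
        def status(self, conn):
            """Implementation-dependent report about the connection's state."""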
It was left to each implementation designer to provide an appropriate API which best fit into their particular machine environment.  Personally, the PDP-11/40 Unix environment was so limited (32K memory - that's K, not M or G) that there wasn't a lot of room for anything fancy in my API.

Unfortunately, I think that lack of a specific standard API resulted in some TCP/IP implementations that did not provide the intended kind of information and control through the API, or that decided on their own to abort a connection that "timed out".

The TCP/IP "service specification" was something like - "Keep trying until hell freezes over."  So, getting back to the original question - yes, the Internet couldn't "fail" as long as it kept trying.

HTH,
/Jack Haverty
Point Arena, CA
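(That default has survived into today's stacks: an established connection that is simply idle is never timed out by TCP itself, and an application that wants the stack to give up has to ask for it explicitly. Below is a minimal Python sketch of those knobs; the host and port are placeholders, and TCP_USER_TIMEOUT and the keepalive constants are Linux-specific option names - other systems spell them differently.)

    import socket

    sock = socket.create_connection(("example.org", 80))  # placeholder peer

    # An idle established connection is left alone indefinitely unless the
    # application opts in to keepalive probes (or the peer closes/resets it).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probing
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before giving up

    # For data the stack is still trying to deliver, TCP_USER_TIMEOUT puts an
    # explicit bound (in milliseconds) on how long it may remain unacknowledged
    # before the connection is aborted, instead of the system's default
    # retransmission policy.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 2 * 60 * 1000)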
From vint at google.com Wed Oct 28 06:14:37 2009
From: vint at google.com (Vint Cerf)
Date: Wed, 28 Oct 2009 09:14:37 -0400
Subject: [ih] lack of service guarantees in the internet meaning that it cannot ever "fail"
In-Reply-To: <1256711996.3370.122.camel@localhost>
References: <4AE5B458.8050702@cs.tu-berlin.de> <4AE6FFD6.3080000@dcrocker.net> <20091028030411.GG1833@zod.isi.edu> <1256711996.3370.122.camel@localhost>
Message-ID: <67BB1B5B-CF8E-4F33-AB3B-E815E685337A@google.com>

+1

vint

On Oct 28, 2009, at 2:39 AM, Jack Haverty wrote:
> The TCP/IP "service specification" was something like - "Keep trying
> until hell freezes over."  So, getting back to the original question -
> yes, the Internet couldn't "fail" as long as it kept trying.

From jack at 3kitty.org Wed Oct 28 10:23:13 2009
From: jack at 3kitty.org (Jack Haverty)
Date: Wed, 28 Oct 2009 17:23:13 +0000
Subject: [ih] lack of service guarantees in the internet meaning that it cannot ever "fail"
In-Reply-To: <67BB1B5B-CF8E-4F33-AB3B-E815E685337A@google.com>
References: <4AE5B458.8050702@cs.tu-berlin.de> <4AE6FFD6.3080000@dcrocker.net> <20091028030411.GG1833@zod.isi.edu> <1256711996.3370.122.camel@localhost> <67BB1B5B-CF8E-4F33-AB3B-E815E685337A@google.com>
Message-ID: <1256750593.3368.4.camel@localhost>

Aaaacck!  In my list of ancient operating systems infected by the first TCP/IP...how could I forget the Fuzzball!  Nothing was like a Fuzzball!

Sorry DaveM....
/Jack

On Wed, 2009-10-28 at 09:14 -0400, Vint Cerf wrote:
> +1
>
> vint
From jnc at mercury.lcs.mit.edu Fri Oct 30 08:20:10 2009
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 30 Oct 2009 11:20:10 -0400 (EDT)
Subject: [ih] ARPANet anniversary
Message-ID: <20091030152010.1A6366BE697@mercury.lcs.mit.edu>

So, Fox News has a nice little segment about how today is the 40th anniversary of the ARPANet.  The usual journalistic mistakes abound: they said it was the 40th anniversary of the Internet - which I can sort of forgive them for - and also the 40th anniversary of the WWW - which is a little less excusable.

Still, at least they covered it: I saw nothing on CNN (either the main page, or the technology section), nor on the NYT site (ditto).

The thing that was most irritating is that it was very Kleinrock-centric, leaving out Licklider (one especially irritating segment went on about 'computers talking to people', or words to that effect, leaving out Lick's key role in that idea), Taylor, Roberts, etc.

Noel

From chris at cs.utexas.edu Fri Oct 30 10:13:50 2009
From: chris at cs.utexas.edu (Chris Edmondson-Yurkanan)
Date: Fri, 30 Oct 2009 12:13:50 -0500
Subject: [ih] ARPANet anniversary
In-Reply-To: <20091030152010.1A6366BE697@mercury.lcs.mit.edu>
References: <20091030152010.1A6366BE697@mercury.lcs.mit.edu>
Message-ID:

NPR's All Things Considered yesterday seems to be the best of the descriptions, written by Guy Raz.

NPR: http://www.npr.org/templates/story/story.php?storyId=114280698

And Guy Raz will have more stories; the next one, about ARPANET's beginnings, will air tomorrow, Oct 31st, on Saturday's All Things Considered.

Chris

On Oct 30, 2009, at 10:20 AM, Noel Chiappa wrote:
> So, Fox News has a nice little segment about how today is the 40th
> anniversary of the ARPANet.
Chris Edmondson-Yurkanan (chris at cs.utexas.edu)
Contact info: www.cs.utexas.edu/~chris/

From rickt at rickt.org Fri Oct 30 10:43:32 2009
From: rickt at rickt.org (rick tait)
Date: Fri, 30 Oct 2009 10:43:32 -0700
Subject: [ih] ARPANet anniversary
In-Reply-To: <20091030152010.1A6366BE697@mercury.lcs.mit.edu>
Message-ID:

I was lucky enough to attend the "40th Anniversary of the Internet" conference yesterday, a one-day event held at the UCLA Henry Samueli School of Engineering & Applied Science.  Prof. Kleinrock acted as MC throughout the day, Nicholas Negroponte gave an excellent keynote, and a wide-ranging series of panels and moderated discussions were held.  It was jolly good, and I had a great time.

The program list is here, and includes a full list of the speakers and points of discussion:
http://www.engineer.ucla.edu/IA40/program.html

I understand the entire event was broadcast live on ustream, and was also recorded so anyone can watch any of the speeches or discussions.  They are archived here:
http://www.ustream.tv/channel/internet-40th-anniversary-ucla

I'd highly recommend watching the speech by the newly-appointed Director of DARPA (Regina E. Dugan); it was so very refreshing to see a fairly young, non-male head of a serious US.gov agency.  And she really seems to "get it".  Her speech is here:
http://www.ustream.tv/recorded/2448272

Best,
Rick Tait
UNIX Engineer & on-net for 19 of those 40 years.

For some reason, Noel Chiappa said on 10/30/09 8:20 AM:
> So, Fox News has a nice little segment about how today is the 40th
> anniversary of the ARPANet.

From jnc at mercury.lcs.mit.edu Fri Oct 30 10:52:55 2009
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Fri, 30 Oct 2009 13:52:55 -0400 (EDT)
Subject: [ih] ARPANet anniversary
Message-ID: <20091030175256.97E426BE69D@mercury.lcs.mit.edu>

> From: jnc at mercury.lcs.mit.edu (Noel Chiappa)

> I saw nothing on CNN (either the main page, or the technology
> section),

Ooops, spoke too soon:

http://www.cnn.com/2009/TECH/10/29/kleinrock.internet/index.html

This one is even more bogus, though, sigh - "Web Pioneer"?
> From: Chris Edmondson-Yurkanan
> NPR's All Things Considered yesterday seems to be the best of the
> descriptions, written by Guy Raz

Well, I'm glad at least _one_ news organization seems to be able to walk and chew gum at the same time!  Kudos to NPR for doing a better job than their other media brethren.

C'mon, news-people, it's not like you need to go into obscure distant dusty paper archives to get correct info!  (And people wonder why the news media have such low favourability ratings...)

Noel