chicksdaddy writes: "The pervasiveness of the NSA's spying operation has turned it into a kind of bugaboo — the monster lurking behind every locked networking closet and the invisible hand behind every flawed crypto implementation. Those inclined to don the tinfoil cap won't be reassured by Vint Cerf's offhand observation in a Google Hangout on Wednesday that, back in the mid 1970s, the world's favorite intelligence agency may have also stood in the way of stronger network layer security being a part of the original specification for TCP/IP. (Video with time code.) Researchers at the time were working on just such a lightweight cryptosystem. On Stanford's campus, Cerf noted that Whit Diffie and Martin Hellman had researched and published a paper that described the functioning of a public key cryptography system. But they didn't yet have the algorithms to make it practical. (Ron Rivest, Adi Shamir and Leonard Adleman published the RSA algorithm in 1977). As it turns out, however, Cerf did have access to some really bleeding edge cryptographic technology back then that might have been used to implement strong, protocol-level security into the earliest specifications of TCP/IP. Why weren't they used? The crypto tools were part of a classified NSA project he was working on at Stanford in the mid 1970s to build a secure, classified Internet. 'At the time I couldn't share that with my friends,' Cerf said."

It would be utterly obsolete by now: a legacy function that would have to be supported for old apps, and a security Swiss cheese. TCP is better off as a pure transport-layer protocol with modern crypto layered on top.

Not sure if you meant to imply otherwise, but SSL certainly makes a website slower. Now, on most devices there's plenty of CPU available to do the actual encryption, so that's not usually a problem. But there's still the initial handshake to consider, and it still disables shared caching. And of course, there are a lot of devices using HTTP that don't have desktop-class CPUs, so the CPU issue isn't as nonexistent as you might assume.

This is a classic solved problem in computer science: choose an algorithm that you can support on the generation of machines you plan to deploy, even if it's slow in the lab.

MIT specified an amazingly fast processor for Project Athena: an entire 1 MIPS. Unheard of! Of course, it was perfectly normal by the time Athena rolled out. [Origin: the guys there explaining we could use the DEC 2100s we already had at York if we wanted to deploy Athena.]

This is a classic solved problem in computer science: choose an algorithm that you can support on the generation of machines you plan to deploy, even if it's slow in the lab.

Yeah, and now computers are so fast that the encryption is suspect.

Think about it: GSM has been around for 20 years, and its encryption has been hacked for the past half-decade, if not more. And why? Because back then, the encryption was pretty much unbreakable with equipment of the day and implementable on hardware available at the time.

TCP is the transport layer; IP is not (at least by the OSI model, and I think the TCP/IP model too, though I'm a bit rustier on that one: the network layer is IP).

There is no reason to imagine TCP/IP could not have included Session or higher level encryption protocols without really affecting the TCP or IP parts of the protocol stack. The design could well have been exactly as you suggest.

x86 has been updated so much that a modern x86 processor could be accurately described as a hardware x86 emulator. I'd really like it if Intel could introduce an 'x86-2' instruction set that dumped all the legacy stuff but kept the same basic architecture. It'd need software to be recompiled, but not rewritten. Make it 64-bit from the start; remove such oddities as the BCD instructions and the old 24-bit protected mode and 20-bit real mode. It'd be expensive, but if they can coax just a few percent extra out of the hardware by dumping legacy then it'd still sell to the HPC and server markets.

A chip that would be 3-4 months faster, at the expense of being binary-incompatible with all existing software, and effectively the same design as current chips, would be a bone-headed move.

Which Apple did 8 years ago when they moved away from PowerPC. I worked on maintaining separate architecture builds of software for unsupported-version machines nearly 3 years ago. I also have a friend who is locked out of ever getting past Mac OS 10.4.11 precisely due to binaries. On the good side, the OS busted the 32-bit 4 GB RAM barrier natively long before Windows Vista was out. Arch dumping can be done, but sweeping changes working for a 1% market isn't the same as scaling up to 90%+.

The difference is that those moves were for substantial performance gains and development cost reductions. This would be a minimal improvement, if any, in performance, and a large retooling cost for a small reduction in eventual unit cost.

I'd really like it if Intel could introduce an 'x86-2' instruction set that dumped all the legacy stuff but kept the same basic architecture. It'd need software to be recompiled, but not rewritten. It'd be expensive, but if they can coax just a few percent extra out of the hardware by dumping legacy then it'd still sell to the HPC and server markets. Recompiling linux and packages is a small price to pay.

Recompiling Linux and packages. That has worked out so well for ARM servers, so far.

I think that's a terrible idea. I don't think the 20-bit real mode, etc., are actually used except for the BIOS, which is in the process of being replaced by UEFI, and I'm not sure all of those instructions actually still work.

But the big thing about Intel is the idea that you can just take whatever x86 software and run it. Maybe recompile if you have something that can take advantage of the SIMD instructions, but it doesn't

The x86-2 mode you are looking for is 64-bit mode. It pretty much resolves the painful problems with x86; all you're left with that's 'bad' are some obnoxious-to-use opcodes... that only compiler authors and asm programmers have to deal with.

At that level, those opcodes don't even really bother them compared to the more detailed bits of asm.

You're right, except for one detail: All the 32-bit support is still in there! Backwards compatibility was too important to abandon, and even 64-bit operating systems often have 32-bit bootloaders. What we have now are 64-bit processors designed to be backwards compatible with 32-bit processors designed to be backwards compatible with 16-bit processors designed to be backwards compatible with the chip that started the whole chain, the 8-bit 8080.

It's true, that had the NSA chosen to share that info, we could have had better security. On the other hand, the NSA were the ones that developed it, so if not for the NSA, it would not have existed to use.

This. I remember back in the early 90s when I worked for the Department of Veterans Affairs and lots of data needed to be encrypted. It was fairly simple encryption by today's standards (DES?) but still required a separate encryption card in order to operate at sufficient speed. Adding that to every TCP/IP packet? It would have stopped Linux in its tracks.

The reason TCP/IP proliferated was because it was light-weight and easy to implement. Crypto would have killed that.

There would have been more resistance to adopting it, too.

As it was, there was substantial resistance among people and institutions sited outside the US, because the Internet was a DARPA project, i.e. U.S. Military. Other countries, organizations within them, and even some people in the US, were concerned about things like what t

That, and if Novell had implemented a network-ID registration entity. Many Novell installations used network ID 00:00:00:01 because that's what was in the manual. This made them unconnectable for all intents and purposes.

If TCP/IP had encryption way back when, it never would have worked because it's too slow. Shit, stuff was so slow that people turned off checksumming. Imagine having to do something exciting, like actual encryption. It'd be worse than running a 300 baud modem.

At the time the Internet was the (D)ARPANET, and export to other countries wasn't really on the horizon anyway. I think had this gone into place, the headline would be "Internet may have been commercially adopted decades sooner, if not for built-in security mechanisms."

Different parts. The packet-switching technology was military in origin - they were seeking a new form of communication network that could continue to operate without downtime in the face of massive physical damage, like cities being nuked. Academia soon adopted the technology, and the early internet culture came from there.

The packet-switching technology was military in origin - they were seeking a new form of communication network that could continue to operate without downtime in the face of massive physical damage, like cities being nuked. Academia soon adopted the technology, and the early internet culture came from there.

The Internet was NEVER "owned by no one." It isn't a magic kingdom. It's hosted on servers and backbones that were *always* owned by someone(s). So the 'free as a bird' perspective is just blatant fantasy.

The earliest Internet tech was developed for DARPA/USGOV. It also appeared around the same time in academic uses. Neither of these was 'free' nor 'uncontrolled'.

It may have been not heavily policed in the early days, because nothing much of general public interest (or interest to the movers and shakers) was

Rather misleading article and slant there. It implies that the NSA deliberately took action to make TCP/IP insecure. However, in reality, the NSA merely didn't contribute their classified work towards the specification of TCP/IP. And frankly, that's a good idea. The overhead of encryption at that time would have been too much. Additionally, cryptography only gets better with time, so whatever algorithm had been selected would have long since become obsolete. And due to backwards compatibility, it would still have to be implemented. After all, things like routers are a tad more difficult to update than programs.

Yep, and likely was NSA research, which is a typical exploration into the subject... much like any research university.

It's when the politicians and generals (aka customers) decide to take research out of R&D and into production that people cry foul. ThinThread-TT (sure, the agency doesn't use ThinThread, but likely uses a variant of its design in today's system, regardless of what TT's creators say) is a great example.

Rather misleading article and slant there. It implies that the NSA deliberately took action to make TCP/IP insecure. However, in reality, the NSA merely didn't contribute their classified work towards the specification of TCP/IP.

Yes, Slashdot is rather sad these days.

But the NSA isn't just about withholding classified information. The NSA is about weakening encryption standards. [wikipedia.org] Vint Cerf said he would have used encryption if he had the opportunity to do it over again. The Internet community had such an opportunity, IPv6 with IPsec, and the NSA bungled it up. [infosecuri...gazine.com]

IPsec doesn't involve the routers, because that would kill performance. IPsec is designed to handle different algorithms, so you don't need to support the same broken algorithm.

Believe it or not, there are also some instances where cryptography is not needed, such as for purely publicly accessible information that can benefit from being cached, etc.

I don't think there is any instance where cryptography would not be useful, as long as privacy is an option. Most Internet communications are point-to-point, so caching should not be done in between. From an opsec point of view, it's less risky to use encryption for confidential information if you also use encryption for everything else, too. [businessweek.com]

Even for publicly cached data, you could use cryptography for authenticity instead of confidentiality. For example, DNSSEC [icann.org] is about proving the authenticity of DNS information.

Most Internet communications are carried in packets with unique source address and unique destination address. Conceptually, it doesn't matter whether those packets are encoded with Point-to-Point Protocol [wikipedia.org] on a serial cable, or whether they go through a bunch of routers first. A more pedantic term is unicast. [wikipedia.org] So, the actual counterexample would be multicast, and despite best efforts, there's very little of that on the Internet.

The real exception to point-to-point communications is WAN acceleration, [networkworld.com] but I'

TCP incurs some overhead. Where you don't want that overhead, you can use UDP. Also, some applications do not require the "conversation" or "bidirectional stream" model that TCP provides. UDP fits the bill here.

Most things don't use the entire stack. TCP/IP needs to be separate layers because you don't want to use TCP for everything.

Everything on the internet has an IP address, so that is the universal internet layer. You can put TCP or UDP or any number of more obscure layers on top of that.

Most applications squish the session, presentation, and application layers into one; keeping them separate is optional. There isn't a separate encapsulation header for each, just a session flag to keep track of the individual connection.

Though in some applications it has gotten silly. Many applications communicate over HTTP because it's the one protocol you can be confident will get past a corporate network firewall and proxy, even for traffic like push IM messages or real-time media that HTTP wasn't designed for and isn't suited to.
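The split described above (one universal internet layer, interchangeable transports on top) shows up directly in the sockets API. A minimal sketch, Python standard library only, purely illustrative:

```python
# Both TCP and UDP sockets sit on top of the same internet (IP) layer;
# only the socket type (SOCK_STREAM vs SOCK_DGRAM) picks the transport.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP: byte stream
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: datagrams

assert tcp.family == udp.family == socket.AF_INET  # same internet layer
assert tcp.type != udp.type                        # different transports

tcp.close()
udp.close()
```

Same address family, same IP addressing underneath; the application chooses a transport per its needs, which is exactly why the layers are kept separate.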

I do not see why TCP and IP could not have been created as a single layer.

That was one of the major divergences from other networking schemes of the time that gave TCP/IP an advantage.

IP is a lower layer than TCP. It's about getting the packet from router to router, and is as deep into the packet as core routers have to look to do their jobs. Core routers are supposed to be "as dumb as rocks," putting as little effort as practical into forwarding each packet, in order to move as many of these "hot potatoes" as possible.

There were individuals and organizations back in the seventies and eighties that got in trouble with the US Government for writing and publishing software that used strong encryption. The problem was that the published code was visible from outside the US and ran afoul of ITAR regulation (citation: check the history of PGP). Incorporating strong encryption in TCP/IP would have made its use and adoption subject to US ITAR regulation.

What do you do about countries like the US that still limit the export of strong encryption as a military munition? How about countries which will not permit their citizens access to such encryption? And how do you get the assorted governments of the world to agree upon and implement one standard? The internet isn't some kind of nationless paradise where information gambols on the green and frolics in the sun. More like the Wild West, with laser-wielding sharks, hookers, and blackjack thrown in.

Whenever I hear anti-NSA rhetoric, I ask: imagine the same things being said about Alan Turing et al working to decode Germans' messages... Would Mr. Snowden receive the same respect and adoration, if he published the secrets of Bletchley Park [wikipedia.org] in 1943?

How about the horrible "privacy invasion" that was the interception of the Zimmermann Telegram [wikipedia.org]?

Not excusing everything the NSA is doing these days, but putting things in perspective...

In 1943 Mr. Snowden would have been quite lucky if he got a trial before he was executed. We were fighting for our lives back then.
As to the rest, it is a matter of scale. In 1790 I could follow you around and publish your daily activities in the paper. Unless you hired 50% of the population to be reporters to follow the other 50% and then switched them off every other day, no one could possibly publish what everyone did in every country every day.
In 1980 the CIA/NSA/KGB/MI5/MI6/Mossad/etc. could do a fai

Bletchley Park is in the UK. No doubt he would have hanged; I'm just wondering whether he would have had a public trial. My guess is not.
And yes - non-state actors are a bitch because they don't have anything you can threaten. The USA attacking their "home" country is often a GOAL of theirs, not a fear.

Re: Would Mr. Snowden receive the same respect and adoration?
Yes, as US government protections are in place for just such legal events, e.g. safety from US government surveillance without a warrant.
If you see US Constitution protections being removed via color-of-law efforts, you have the duty, right, and responsibility to bring such facts to the US public's attention.
The US political and legal system can then correct the legal issues.
The US legal issues raised by Snowden are easy to understand in an open court by most legal p

Yes, as US government protections are in place for just such legal events, e.g. safety from US government surveillance without a warrant.

Snowden's published revelations cover much more than (admittedly reprehensible) warrantless spying on US citizens. For example, he revealed NSA's capability to record all telephone traffic of a foreign country [techcrunch.com].

Anyone alerting the Germans in 1943, that Enigma is compromised, would've been (justly) denounced as a traitor... What changed?

Wow, it's always a tough competition, but this may win "Ridiculous Slashdot Headline Of The Week".

Logic 101, folks. Let's recap that headline:

"TCP/IP Might Have Been Secure From the Start If Not For the NSA"

Now, what's the story here? One of TCP/IP's designers had access to some then-bleeding-edge crypto *that was part of an NSA project*, but couldn't include it in TCP/IP because it was secret.

Now, can we support the idea that "if not for the NSA" that crypto could have gone into TCP/IP? No, because "if not for the NSA" that crypto *wouldn't have fucking existed at all*. The NSA wrote it. So the choices are "code written, but not available for use" or "code not written at all". Practical difference for the purposes of TCP/IP: zip.

It would be one thing to encrypt all traffic end-to-end with a Diffie-Hellman exchange per TCP connection. But it would be quite another thing to prevent active attacks from three-letter agencies. You'd need a way to establish and ensure trust as well. If they can't decrypt the connection itself, they can use an active attack to intercept it and decrypt it. Even if the target is using SSL with PFS, they could always national-security-letter a signed certificate out of a CA in their jurisdiction. It doesn't
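The "Diffie-Hellman exchange per TCP connection" idea can be sketched in a few lines. This is a toy with a 127-bit modulus and an arbitrary generator, nowhere near real-world strength, and, as the comment above notes, it does nothing about active attacks:

```python
# Toy per-connection Diffie-Hellman key agreement (demonstration only;
# real deployments use 2048-bit groups or elliptic curves).
import secrets

p = 2**127 - 1   # a Mersenne prime; toy-sized, do not use for real traffic
g = 3            # generator (illustrative choice)

a = secrets.randbelow(p - 2) + 2   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 2   # Bob's ephemeral secret

A = pow(g, a, p)   # sent over the wire in the clear
B = pow(g, b, p)   # sent over the wire in the clear

# Both sides derive the same shared key; a passive eavesdropper who sees
# only (p, g, A, B) faces the discrete-log problem.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
assert key_alice == key_bob
```

This defeats passive wiretapping of a single connection, but a man in the middle who substitutes his own A and B values still wins, which is exactly why the trust-establishment problem above is the hard part.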

The people who invented TCP/IP weren't even thinking about security. The network they imagined was one that went between a few buildings on the same campus. Nobody dreamed of the need for security at that point, any more than Alexander Graham Bell was thinking about voice security when he invented the telephone.

There seem to be more NSA shills here now, using faulty logic to defend the NSA, such as claiming that crypto was too slow back then, and that it was right to withhold crypto.

The choice of using crypto on the net would have been nice to have back then, for protecting people, nations, and businesses, even if the crypto was slow. So the job of the NSA is obviously not to protect the USA, but to weaken it, and others. There was faster crypto back then, which of course was also weaker, but it could have been strengthened by such methods

I was one of the leading team members at System Development Corporation (SDC) in the 1970's on various secure operating system and secure networking projects for various US and UK governmental bodies.

Some of that work was classified, much was not.

In late 1974 David Kaufman and I were working on network security, particularly on the then-monolithic TCP (there was at that time no formalized underlying datagram IP layer). Among other things we were designing and building a multi-level secure network, with mult

Encryption can be applied at various layers. You can have link-layer encryption (layer 2), network-layer encryption such as IPsec (layer 3), transport-layer encryption such as SSL (layer 4), and application-layer encryption such as SSH (layer 7).

They're actually talking about layer 3 (network layer) encryption... which is entirely possible, if you want to slow down the routing of the entire network. Yes, current encryption is in the presentation/application layers (6/7); the idea is that it could have been implemented at a much lower layer in the stack, had Cerf been allowed to take the work he did for the NSA to his work on TCP/IP.

although to be fair I doubt it would have been implemented, or been optional. as the networking speeds of
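To see why the layer matters for routing speed, here's a toy sketch (XOR stand-in, emphatically not real crypto) of the difference between encrypting only the payload and encrypting the whole packet, headers included:

```python
# Toy illustration of encryption layering. Routers must read the
# network-layer header in the clear to forward packets, so true layer-3
# encryption would put decryption work on every hop.
KEY = 0x5A

def xor_bytes(data: bytes) -> bytes:
    """Stand-in 'cipher': XOR every byte with a fixed key."""
    return bytes(b ^ KEY for b in data)

header = b"SRC=10.0.0.1;DST=10.0.0.2;"   # stand-in for an IP header
payload = b"GET /secret HTTP/1.0"

# Layer 4+ encryption: payload hidden, header still readable by routers.
upper_layer_packet = header + xor_bytes(payload)

# Layer 3 encryption of the whole packet hides the addresses too, so every
# router along the path would have to decrypt before it could forward.
layer3_packet = xor_bytes(header + payload)

assert upper_layer_packet.startswith(header)      # routers can still route
assert not layer3_packet.startswith(header)       # routers are blind
assert xor_bytes(xor_bytes(payload)) == payload   # XOR is its own inverse
```

The addresses and port numbers are exactly the fields core routers and middleboxes need fast access to, which is one reason encryption settled at the transport layer and above.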

I really don't like calling people out, but you're pretty full of BS. By your own admission you do LED work growing cannabis, which hardly makes you an OSI expert, and when I google your name (initials: A. M. Q.) + OSI or RFC, nothing at all comes up.

According to your Google+, you graduated HS around the same time I did, which means you were in middle school when the OSI model was formalized. It would be mightily impressive if you "wrote Layer 6" before entering high school.

The headline is horribly horribly misleading. I hope people at least RTFS.

I read the summary, and it seems to be aligned with the headline: "Vint Cerf's offhand observation in a Google Hangout on Wednesday that, back in the mid 1970s, the world's favorite intelligence agency may have also stood in the way of stronger network layer security being a part of the original specification for TCP/IP."

Oh, by the way, "bleeding edge cryptographic technology" is something you never ever want to use.

It was "bleeding edge" in 1975 back when TCP/IP itself was still in its infancy, but would have been refined over time.

the world's favorite intelligence agency may have also stood in the way of stronger network layer security

But that is misleading. The NSA did not "stand in the way." They just declined to help. That is not the same thing.

The research existed, Cerf had access to it, but they didn't allow it to be used.

If your house is burning down and the fire chief prevents you from using the fire hydrant in front of your house even though you have the right equipment to hook up to it, wouldn't you say he's standing in the way? He's not just declining to help, he's actively preventing you from using tools and knowledge that you have because he's afraid that other people will see you do it and then they'll know how to fight their own fires.

At the time, it would also have been considered a state secret. Until the late 90s, publishing any of a huge number of crypto tools to the international community was illegal. So even if he had permission to publish this research within the US, it couldn't be given out internationally. That's not the NSA's decision; that was much higher up than them.

The NSA didn't tell Cerf not to use this cryptography scheme. Cerf didn't even ask. He was working on a classified research project (NSA cryptography) and on an unclassified academic experiment (TCP/IP).

I keep fish as a hobby. I have a friend who researches new antibiotics. Do you think my friend's employer is "standing in the way" when he doesn't give me the latest and most potent antibiotics which aren't even publicly available to treat my fish?

the world's favorite intelligence agency may have also stood in the way of stronger network layer security

But that is misleading. The NSA did not "stand in the way." They just declined to help. That is not the same thing.

Maybe by your standards. Kind of like standing next to someone whose breathing machine came unplugged, yet refusing to help by walking over and plugging it in. At some point, inaction is as bad as action. Those with the power to easily help with no risk or effort, yet don't, are just as bad as those who are purposefully bad.

The NSA has two conflicting tasks: (1) secure national communications; (2) break other countries' communications.

This made sense in the 1950s, when secure encryption was something only the military, spies, etc. used. It breaks down badly in the international Internet era.

"They declined to help" hides the fact that _that was their job_. They are the national, even world, experts on the problem, and they stood back and allowed a broken Internet security model. Elsewhere, they've made Swiss cheese of encryption standards.

The headline is horribly horribly misleading. I hope people at least RTFS.

Exactly. This isn't a "would have been" that failed because of NSA involvement. This is a "would not have been" that failed all on its own. The NSA had some confidential tools at its disposal that may have been able to salvage the idea, but them not sharing their tools is hardly a reason for us to be shaking our fists and saying "it would have worked if not for them". It's like blaming a toll road for your late arrival after choosing to take public streets instead of the toll road. It makes no sense.

I'd imagine if the NSA did have their hands in helping to secure internet communications, every country would have been up in arms last year, and the internet would be completely fractured by now.

Their non-involvement was a good thing, not a bad thing. We now know there are better things that can be done to secure the internet, but not having implemented them yet does not mean things are bad right now either.

The only way to hide the traffic path is through partial-information relaying: the Tor approach. Nasty overhead. But even the most pathetic payload encryption would make a huge difference: it would mean tapping all traffic at a trunk would require dynamically following hundreds of thousands of conversations between tens of thousands of nodes. The NSA could do it; a lot of smaller governments couldn't.

Also, even a DH key exchange without any public-key authentication at all is still somewhat effective: yes, it can be MITMed with ease, but such an attack is also very detectable if you have a side channel, which means any untargeted mass-monitoring operation would be swiftly noticed.

Also, even a DH key exchange without any public-key authentication at all is still somewhat effective: yes, it can be MITMed with ease, but such an attack is also very detectable if you have a side channel, which means any untargeted mass-monitoring operation would be swiftly noticed.

Perhaps a stupid question (not a crypto expert here), but if you have a not-easily-MITMed side channel, wouldn't you use that for key exchange? Or at least to verify the keys?
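For what it's worth, the "verify the keys" variant is the usual answer: each side computes a short fingerprint of the derived secret and compares it over the side channel (reading it over the phone, say). A toy sketch, reusing demonstration-sized DH numbers, not real crypto:

```python
# Detecting a MITM on an unauthenticated DH exchange by comparing short
# fingerprints of the derived secret out-of-band.
import hashlib
import secrets

p = 2**127 - 1   # toy prime modulus, demonstration only
g = 3

def fingerprint(secret: int) -> str:
    """Short, human-comparable digest of a derived secret."""
    return hashlib.sha256(secret.to_bytes(16, "big")).hexdigest()[:8]

a = secrets.randbelow(p - 2) + 2   # Alice's secret
b = secrets.randbelow(p - 2) + 2   # Bob's secret

# Honest exchange: both sides derive the same secret, fingerprints match.
honest = pow(pow(g, a, p), b, p)

# MITM: Mallory runs a separate exchange with each side, so Alice and Bob
# end up with *different* secrets, and the fingerprints disagree.
m = secrets.randbelow(p - 2) + 2
alice_side = pow(pow(g, m, p), a, p)   # Alice <-> Mallory
bob_side = pow(pow(g, m, p), b, p)     # Mallory <-> Bob

assert fingerprint(alice_side) != fingerprint(bob_side)  # mismatch exposes MITM
```

Using the side channel for the full key exchange is usually impractical (it's low-bandwidth and slow), but a few fingerprint digits are enough to verify the keys, which is the cheap part.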