Most users want the risk management paradigm where they buy insecure systems that are fast, pretty and cheap, then occasionally deal with a data loss or system fix. The segment of people willing to pay significantly more for quality is always very small and there are vendors that target that market (e.g. TIS, GD, Boeing and Integrity Global Security come to mind).

So, if users demand the opposite of security, aren’t capitalist system producers supposed to give them what they want? It’s basic economics, Bruce. They do what’s good for the bottom line. The only time they started building secure PCs en masse was when the government mandated them. Some corporations, part of the quality segment, even ordered them to protect I.P. at incubation firms and reduce insider risks at banks. When the government killed that policy and demand dropped again, they all went back to producing insecure systems. So, if user demand is required and they don’t demand it, who is at fault again? The user. They always were and always will be.

So, it’s 2016. I’ve learned quite a bit since then. I learned about the following events in terms of developers/businesses doing what you asked:

Burroughs ships the first mainframe (1961) that’s immune to most code injection, performs well, supports high-level code for long-term benefit, and so on. Most buy IBM’s System/360 for backward compatibility with IBM garbage plus a raw performance benefit. Burroughs survives in the form of Unisys but eliminates hardware protections and focuses on price/performance/compatibility. Pay attention, as you’ll see that again and again. ;)

During the minicomputer era, quite a few companies show up offering extra security at the hardware or software level. They’re all also-rans except for System/38 and OpenVMS. System/38, per user demand, eliminates hardware-level security in favor of POWER compatibility with increased price/performance. OpenVMS is solid enough that it gets retired from DEFCON. Many get off of it for machines with more features or speed at a lower price. Many refuse to get on, for backward compatibility with insecure systems. UNIX flourishes for performance/capabilities on cheaper hardware while discarding the security and maintainability benefits of the MULTICS project that preceded it, and of simpler predecessors too. Those dominating markets are whoever crams the most features and speed into the cheapest boxes with backward compatibility. Intel tries to change things with the i432 and BiiN projects, at a loss of $1-2 billion, when nobody buys them because they aren’t backward compatible and are slower.

The microcomputer era happens. Security is ignored entirely by users so they can squeeze the most performance and features out of boxes at the lowest prices. Newcomers are welcome for a while, as systems are too simple to really get backward compatibility. IBM, Apple, Microsoft, and Amiga have strong offerings, with IBM’s the most robust and Amiga’s the most powerful. Apple’s is cool, insecure, and affordable; Microsoft’s has business apps, is insecure, and is affordable. Apple and Microsoft win.

Next, the PC market. We have NextStep, the WinNT project, BeOS, and some more secure desktop attempts. The secure desktop attempts fizzle out since nobody buys them. NextStep is insecure UNIX mixed with a productive, beautiful UI. It sells well. BeOS redefines the core of the OS for an insane level of concurrency and responsiveness on that era’s hardware, with a microkernel, too. WinNT mixes new hardware (performance), a good core (VMS), an insecure implementation (time to market), and backward compatibility with kernel and user-mode code from insecure platforms. Apple buys NextStep, Windows NT makes Microsoft billions, and BeOS dies.

The mobile market. Lots of so-so OSes with various interfaces. In a rare win, the one aiming for some kind of security gets dominant since it aimed at business productivity. The truly secure ones stay at around $3,000 due to low volume. Almost nobody cares, but there’s still enough of a market to sustain them. Apple puts a mini-Mac on a phone. Then almost nobody cares about security and Bill Gates starts looking poor. A surveillance company then buys Android, keeps mixing insecure platforms for the ecosystem, and becomes the biggest in terms of sales. Blackberry, which rebuilds on a more secure and reliable OS (QNX), dies since nobody wants to buy it or build apps. The others already died. The cryptophones, due to user demand, begin porting their stuff to insecure Android so they can also run Android’s surveillance-oriented apps, but in a “hardened” way. Something similar happens with tablets, where the insecure one dominates for convenience and app ecosystem, whereas the QNX-based PlayBook was extremely impressive technologically but had not enough apps or users.

Server apps. The problem that things were too hard to do or configure securely was well-recognized. Companies built on things like OpenBSD and hardened Linux to make appliances that were easy to use, more secure, and relatively affordable. Consumers bought garbage instead. Companies like DefenseWall and Sandboxie made brainless solutions for Windows security at low prices. Consumers ignored them. Solutions showed up for big companies that even pentesters didn’t breach, for DNS (OpenBSD’s BIND, djbdns, or Secure64), web (Hydra), email (qmail), and so on. Most companies didn’t use them. Even the most plug-and-play systems, with five minutes of configuration, selling for less than the big dogs, had tiny market share.

Consumer apps. Signal vs Facebook Messenger. SpiderOak vs Dropbox. Easy encryption apps vs storing files in the clear. Simple terms with a FOSS license and usability vs a long EULA with bullshit. Private and cheap vs surveillance-oriented and free. It can come down to a free or $1 messaging program that’s just as easy to use as the insecure one. They still won’t use it.

Conclusion

Computer security started accidentally with a well-designed mainframe. Then there were well-designed minicomputers. Then there were more secure desktops. Then there were robust desktops. Then there were mobile and tablet offerings that were more secure or robust. At each step, user demand for things other than security forced suppliers to weaken the properties of the systems to remain competitive. Not just to maintain profit: they had to eliminate security to even exist, given that projects by big companies and startups doing INFOSEC both easy and right mostly disappeared when nobody bought them. If users don’t buy quality or security, then it’s their fault when the supply side produces neither. Matter of fact, our economic system in the U.S. even expects companies to deliver what users want no matter how much bullshit or damage is involved, outside a few things that are illegal.

So, the problem is the users. That’s why I don’t expect to see a “unicorn” startup making an easy-to-use messenger that’s actually efficient, reliable, and secure. Many companies have tried to market these as you’d advise them to. Instead, unencrypted email, very insecure IM, Facebook, unencrypted text, WhatsApp, and Slack dominated that area at various times while the secure ones disappeared or made almost nothing. The insecure winners are still on top now that vetted, secure alternatives are easy, cheap, or free. I rest my case.

Really, security didn’t become important until the late ’90s. People didn’t have critical data on computers, didn’t need to worry about state-level actors, and didn’t have the Internet to connect to, let alone an intranet. You could count it toward reliability, but back then people still weren’t running as much stuff. Even then, security is something that needs to fade into the background and be omnipresent; otherwise people will disable it if it gets in the way or, worse, interrupts the dancing bunnies.

BeOS

BeOS was great, but it definitely wasn’t secure. Everything ran as root. (or was it baron?)

Mobile market.

Arguably the secure one still won here. Apple is very serious about iOS security and actively protects against state-level actors, even the NSA. RIM bent over for any TLA that asked. (And BlackBerry OS 10 isn’t very good IMHO. I speak from using it as my daily driver.)

It was understood to be important to government and big business by the ’80s, with products sold by the early ’90s for high security. The co-inventor of INFOSEC, Roger Schell, was meeting with non-IT industry execs via the Black Forest Group to evangelize high security. He said they wanted it, but CIOs suspected IT vendors wouldn’t do it because they were shipping defective products then charging for fixes. They gave up. Low-assurance stuff went mainstream some time after, and high security fell on deaf ears even as awareness increased.

Re BeOS. Yeah, it wasn’t secure. I was using it as an example where the supplier made things better at the foundation, with a much more robust result. However, backward compatibility and convenience trumped superior design. That often happens to secure stuff.

Re Apple. Apple had BS security with various subversions and idiotic flaws going way back. The iPhone has recently gotten to be No. 1 in security, with RIM just a government lapdog as you said. People bought Apple for features and cool factor, though, while almost nobody bought hardened Windows or Android phones with crypto. That’s more my point.

Matter of fact, our economic system in the U.S. even expects companies to deliver what users want no matter how much bullshit or damage is involved outside a few things that are illegal.

So, the problem is the users.

Or maybe the problem is our economic system. Look what commercialization did to interoperability: 20 years ago, protocols were defined in RFCs and had multiple independent implementations, and some of those still survive to this day (TCP/IP, DNS, SMTP, HTTP); some time after that, it became competing proprietary protocols doing the same thing (e.g., ICQ vs. AIM vs. MSN Messenger vs. Skype vs. …), constantly morphing to discourage reverse engineering. Any move towards interoperability required creating yet another standard (Jabber). Ironically, the current push towards secure protocols results in further proliferation of mutually incompatible protocols (OTR for Jabber, Whisper Systems’ Signal, Surespot, Telegram…).

Or maybe the economic system is not the problem; after all, the industry somehow managed to get together and create one IPsec to rule them all, its faults notwithstanding. But I’m not a socioeconomist, so I don’t know what the problem is; I can only describe the symptoms. I just don’t think we’ll get closer to a secure Internet without coöperating on open standards.

It’s a combination of the economic system and user demand. The bad part of the economic system is companies doing what’s strictly in their own interest even if it’s detrimental to others, or even detrimental to themselves in the long term, because executives are incentivized to worry about the short term. The other part is users who basically never pay for alternatives enough for them to be competitive with the insecure first mover. Many businesses will build what you want to buy. Very few build secure stuff because most that did just went away with no sales. So, user demand is to blame for that aspect of the problem. Another aspect is backward compatibility, where people get locked in to insecure stuff then refuse to make sacrifices to get off it. I think that’s the vast majority of the problem in key categories.

“ I just don’t think we’ll get closer to secure Internet without coöperating on open standards.”

I agree. The high-assurance strategy in the past was always to cheat by using high-security VPNs over the Internet with custom protocols for intranets or interactions with other companies. Plus dedicated lines that aren’t Internet-accessible. Or no Internet at all for key applications. Sad that it comes down to those.

It means creating security that works, given (or despite) what people do

Understand that while a software bug and an easily guessed password can both give up all your secrets, The Programmer believes they can only do something about one of those.

This article does otherwise ring true with me, but it doesn’t actually offer any advice on how to do this. To stop fixing the user, we need to start fixing the programmer. Programmers want to be told what to do; they’re looking desperately for best practices they can use instead of thinking so much (thinking is hard). And yet I feel like with security we should be able to have a modern checklist:

Let the user enter whatever password they want, as long as they can do it twice

If you ask for their email address, send them an email so that they know that it works

Don’t email people links. Tell them you don’t email them links. Tell them they should already know about your website, and if they don’t, apologize: you didn’t mean to spam them.
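The first two items on that checklist fit in a few lines. Here is a minimal sketch; `OUTBOX` and `send_verification_mail` are hypothetical stand-ins for a real mail system, and there is no storage, rate limiting, or hashing here:

```python
import secrets

OUTBOX = []  # hypothetical stand-in for a real mailer


def send_verification_mail(email, token):
    # Per the checklist: no clickable link, just a code the user
    # types back on the site they already know how to reach.
    OUTBOX.append((email, "Your verification code is " + token))


def register(password, password_again, email):
    # "Whatever password they want, as long as they can do it twice":
    # no composition rules, just confirmation.
    if password != password_again:
        raise ValueError("passwords do not match")
    # "If you ask for their email address, send them an email so
    # that they know that it works."
    token = secrets.token_urlsafe(16)
    send_verification_mail(email, token)
    return {"email": email, "verify_token": token, "verified": False}
```

The account only flips to verified once the user echoes the code back, proving the address works before anything depends on it.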

Is that it? Is that good enough?

Or should we say that authentication is hard? Google and Microsoft and Facebook seem willing to deal with these things, and monitor for suspicious activity, and they’re popular enough that the user is likely to have a better relationship with them, and yet they want us to use OAUTH2 which is really complicated and difficult, and shoot just adding a password field here is easy enough.

I would add: give the user the option to use such standard/common hardware tokens as exist. Which at the moment probably just means supporting Google Authenticator and whatever the iOS equivalent is.
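Supporting Google Authenticator mostly means implementing TOTP (RFC 6238), which is small enough to sketch from the standard library alone. This is an illustrative sketch, not production code (no constant-time comparison, no replay window handling):

```python
import hashlib
import hmac
import struct
import time


def hotp(secret, counter, digits=6):
    # RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter,
    # then "dynamic truncation" down to a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret, for_time=None, step=30, digits=6):
    # RFC 6238 TOTP: HOTP keyed on the current 30-second window,
    # which is why codes roll over every half minute.
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)
```

Checking a submitted code is then just comparing it against `totp(shared_secret)` (in practice, also against the adjacent time windows to tolerate clock drift).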

Or should we say that authentication is hard? Google and Microsoft and Facebook seem willing to deal with these things, and monitor for suspicious activity, and they’re popular enough that the user is likely to have a better relationship with them, and yet they want us to use OAUTH2 which is really complicated and difficult, and shoot just adding a password field here is easy enough.

OAuth2 really isn’t that complicated and difficult. Or rather, it is if you implement it yourself, but people have done that already; as an ordinary developer you just drop in a library. If anything, I’d say the main reason developers don’t use it is fearmongering from security/privacy-oriented people.
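For context, the server-side authorization-code flow those libraries wrap comes down to two requests. A rough sketch, with placeholder endpoint URLs (real values come from each provider’s documentation) and the actual HTTP calls left out:

```python
import secrets
from urllib.parse import urlencode

# Hypothetical endpoints; every provider publishes its own.
AUTHORIZE_URL = "https://provider.example/oauth2/authorize"
TOKEN_URL = "https://provider.example/oauth2/token"


def build_authorization_url(client_id, redirect_uri, scope):
    # Step 1: send the user's browser to the provider, with a
    # random `state` value to guard against CSRF on the way back.
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return AUTHORIZE_URL + "?" + urlencode(params), state


def build_token_request(code, client_id, client_secret, redirect_uri):
    # Step 2: the provider redirects back with ?code=...; exchange
    # it server-side (POST this body to TOKEN_URL) for a token.
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
```

The fiddly parts a library handles for you are validating `state`, refresh tokens, and each provider’s quirks on top of this skeleton.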

Our goal should not be to make a ball of mud so big it has every line of code one could ever want in it.

If anything I’d say the main reason developers don’t use it is fearmongering from security/privacy-oriented people.

You should ask people that. Here you have at least two people saying it’s complicated. I just asked on IRC and got another one. I’ve never heard anyone say they don’t use OAUTH2 because it’s insecure except insofar as it is a lot of code and unlikely to be correct.

Our goal should not be to make a ball of mud so big it has every line of code one could ever want in it.

How is that supposed to relate to what I wrote?

You should ask people that. Here you have at least two people saying it’s complicated. I just asked on IRC and got another one. I’ve never heard anyone say they don’t use OAUTH2 because it’s insecure except insofar as it is a lot of code and unlikely to be correct.

I’m talking from experience. I’ve asked people why they set up their own username/password auth rather than using facebook or google and got privacy-fearmongery “I don’t want them having access to all that information”-type answers. Not “it’s insecure”, but a concern that seems to come from the same culture that’s most worried about security.

Our goal should not be to make a ball of mud so big it has every line of code one could ever want in it.

How is that supposed to relate to what I wrote?

That someone has done something complicated doesn’t make it less complicated.

It doesn’t even necessarily make my life easier, because adding a library to my code doesn’t actually remove problems or remove complexity, it simply trades one problem (writing some code) for another problem (tracking someone else’s codebase, making sure I can incorporate updates and fixes, etc).

I think it was as clear as mud.

I’m talking from experience.

How pompous! Does no one have experience but you?

I’ve asked people why they set up their own username/password auth rather than using facebook or google and got privacy-fearmongery “I don’t want them having access to all that information”-type answers. Not “it’s insecure”, but a concern that seems to come from the same culture that’s most worried about security.

That surprises me.

I would never have expected people to say that.

I could understand their users not wanting to give Facebook or Google their details; after all if a user doesn’t trust Google/Facebook/Microsoft to identify them, then it’s not “privacy-fearmongery”. If a user is using a work computer, and doesn’t want to associate their personal family photos with some work-related service, then it’s not “privacy-fearmongery”.

However anyone who is worried about Google knowing their users use their service, should simply ban Google Chrome.

That someone has done something complicated doesn’t make it less complicated.

The whole point of software is that it does make things less complicated. Otherwise we wouldn’t bother.

It doesn’t even necessarily make my life easier, because adding a library to my code doesn’t actually remove problems or remove complexity, it simply trades one problem (writing some code) for another problem (tracking someone else’s codebase, making sure I can incorporate updates and fixes, etc).

A well-written library makes those things not a problem. Again, that’s its raison d’être.

How pompous! Does no one have experience but you?

Not at all. I was bristling at you saying “You should ask people that” as if I hadn’t.

The whole point of software is that it does make things less complicated. Otherwise we wouldn’t bother.

Total nonsense.

It doesn’t even necessarily make my life easier, because adding a library to my code doesn’t actually remove problems or remove complexity, it simply trades one problem (writing some code) for another problem (tracking someone else’s codebase, making sure I can incorporate updates and fixes, etc).

A well-written library makes those things not a problem. Again, that’s its raison d’être.

Wow. I unfortunately remember openssl, libpng, ssh, Linux bugs. Exchange bugs. Windows bugs. Fucking Android bugs. I’ve even had to patch qmail. Literally nobody seems perfect enough to get it right, so do I assume that we’re still waiting for that well-written library?

While many people have implemented OAuth2, there are always different frameworks being created and things get re-implemented, whether correctly or not. OAuth2 is more complicated than it probably needs to be, which is friction, which has long-term effects on security. Making the primitives even simpler would probably go a long way in the long run.

I am saying something different than you are responding to. I am saying that if something should be ubiquitous it needs to be ridiculously simple to implement because people are always generating new code, new frameworks, new languages, new ways components have to interact. Nice libraries come later.

I really don’t think that’s true. The x86 ISA is notoriously baroque, but it’s nevertheless ubiquitous. I don’t think any one person understands the CSS standard. But it doesn’t matter, it’s just easy enough to paper over these things.

I think x86 and CSS are poor counter examples. They are both quite far down in the stack. Few people are implementing their own web browsers or their own ISAs. For x86, one simply needs to build more to gain ubiquity. I am claiming that applications are at the leaves (or near to it) of the developer stack and applications are written in a very wide range of technologies. Almost all web applications need some kind of log in page. Being ubiquitous at the leaves requires being very simple and easily implemented. I might be wrong, but that matches up with my experience (for oauth2, at least).

However, one doesn’t have to implement the entire x86 ISA ecosystem in order to use x86. You can use just the parts you know, or have implemented, and ignore the tricky bits. Because it has an opcode for everything, and supports nearly every I/O and memory access pattern possible, pretty much everyone can use x86 in some way. That is what makes x86 popular, and it is easy to see how its complexity actually helped make it popular.

CSS is similar: If I want to make my text red, I only have to learn some CSS. If I want to implement CSS, I do not need to implement all of CSS. CSS allows me to absorb its complexity incrementally, and it is easy to see how this complexity actually helped make it popular.

However OAUTH2 is not like x86 or like CSS. It is complex, and you have to get it exactly right, and it takes a lot of code to get it exactly right, and that code is different for each provider. It is complicated in a way that makes it difficult to use correctly so most people use it incorrectly.

However OAUTH2 is not like x86 or like CSS. It is complex, and you have to get it exactly right, and it takes a lot of code to get it exactly right, and that code is different for each provider. It is complicated in a way that makes it difficult to use correctly so most people use it incorrectly.

But people don’t usually need to work with every provider. They only need to work with one provider, or maybe a small handful of providers. If I just want to use Facebook authentication for my site, it’s a couple of lines and I don’t worry about the wider standard, much like doing something in x86 or CSS.

Or should we say that authentication is hard? Google and Microsoft and Facebook seem willing to deal with these things, and monitor for suspicious activity, and they’re popular enough that the user is likely to have a better relationship with them, and yet they want us to use OAUTH2 which is really complicated and difficult

I understood this to mean you were asking whether we should delegate authentication to Google/Microsoft/Facebook via oauth2; was that not what you meant?

I understood this to mean you were asking whether we should delegate authentication to Google/Microsoft/Facebook via oauth2; was that not what you meant?

No. I am saying Google, Microsoft, and Facebook claim that password management is too hard for stupid programmers. That is why they used a protocol that promotes lock-in (OAUTH2) rather than actually solving the Real Problem:

You see, the Real Problem is that we are holding some service for the user, and we want to protect it for them by making them authenticate their access to that service. That means we should be delegating password management to someone the user trusts, not someone we trust.

I don’t entirely disagree with the sentiment, but he doesn’t really offer any alternatives. In almost every case, security comes at the cost of usability.

Besides, people have to know what they’re doing to some extent, even if software security is perfect. There’s always the argument that most people don’t understand how a car works, but still know how to drive a car, so why do they have to know how computers security works in order to use a computer? The difference between a car and a computer is that a car isn’t used for the majority of banking, secure communication and secure storage.

I think a better analogy is: people have learned how to use doors, but they’ve yet to learn to use locks. Sure, they don’t have to fully understand the mechanics of locks to use them, but they still have to know to lock their door. It worries me that most people still don’t understand what “the cloud” is or know that email is an insecure form of communication.

I’m going to focus on only one point here, which is the aspect of passwords/logins. One of the wonderful things about corporate environments is their desire to focus on single sign-on solutions. It would be wonderful if someone were working to take that to the Internet as a whole. This is actually one of the reasons I was so excited for Mozilla’s Persona, which seems like an ideal solution in a number of ways, but with some obvious downsides as well. One of the most important issues facing the Internet is decentralized identity. Large technology companies are already trying to own this space. This identity should carry between websites, and a user should be able to have multiple ones for privacy and security reasons. Security would be far better off if we made users remember only a couple of passwords for an identity and then used those identities to log into services, rather than insisting that we create an account for every service.