If I had to name a company whose product is relatively simple in technical terms, a search engine would certainly not be at the top of my list. It seems fairly obvious to me that creating a good search engine is a tough achievement. Sure, creating some kind of search engine is probably easy, but then you end up with another Bing, not another Google.

Bing is actually quite good; it’s probably only about 3-4 years behind Google, and you’ll recall that Google was still pretty damn good 4 years ago. DDG may be a better example of an 80% search engine ;-p

I have a few problems with this. The short summary of these claims is “APT checks signatures, therefore downloads for APT don’t need to be HTTPS”.

The whole argument relies on the idea that APT is the only client that will ever download content from these hosts. That, however, is not true. Packages can be downloaded manually from packages.debian.org, and those pages reference the same insecure mirrors. At the very least, Debian should make sure that a few HTTPS mirrors exist and are used for the direct download links.

Furthermore, Debian also provides ISO downloads over the same HTTP mirrors, and those are not automatically checked either. While they can theoretically be verified with PGP signatures, it is wishful thinking to assume everyone will do that.
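To make the manual step concrete, here is a minimal sketch (Python, with hypothetical file names) of checking a downloaded image against a SHA512SUMS-style list. Note it does nothing about verifying the PGP signature on the list itself, which is exactly the part most users skip:

```python
import hashlib

def check_sha512sums(sums_path, file_path, file_name):
    """Return True if file_path matches the entry for file_name in a
    SHA512SUMS-style list ("<hex digest>  <name>" per line)."""
    with open(sums_path) as f:
        # Map each listed file name (stripping the binary-mode '*' prefix)
        # to its expected hex digest.
        expected = {line.split()[1].lstrip('*'): line.split()[0]
                    for line in f if line.strip()}
    h = hashlib.sha512()
    with open(file_path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 16), b''):
            h.update(chunk)
    return h.hexdigest() == expected.get(file_name)
```

Without a trusted signature on the checksum file, a man-in-the-middle on plain HTTP can of course replace both the ISO and the list, which is the whole point of the argument above.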

Finally, the chapter about CAs and TLS is, sorry, baseless fearmongering. Yeah, there are problems with CAs, but concluding from that that “HTTPS provides little-to-no protection against a targeted attack on your distribution’s mirror network” is, to put it mildly, nonsense. Compromising a CA is not trivial, and thanks to Certificate Transparency it’s almost certain that such an attempt will be uncovered later. The CA ecosystem has improved a lot in recent years; please update your views accordingly.

Related: I’ve always had the idea that one could create a reverse-robots.txt search engine (i.e. one that indexes only the things robots.txt says shouldn’t be indexed). I’m not aware of anyone ever having done it, but it would probably be interesting.
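For what it’s worth, the core of that idea is trivial: a sketch (hypothetical function, parsing only, the crawling part omitted) that pulls out exactly the paths a site asks crawlers to skip:

```python
def disallowed_paths(robots_txt):
    """Extract the Disallow paths from a robots.txt body.

    A 'reverse' crawler would seed its queue with exactly these paths
    instead of excluding them. Comments and empty values are skipped.
    """
    paths = []
    for line in robots_txt.splitlines():
        line = line.split('#', 1)[0].strip()  # drop trailing comments
        if line.lower().startswith('disallow:'):
            path = line.split(':', 1)[1].strip()
            if path:  # an empty Disallow means "allow everything"
                paths.append(path)
    return paths
```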

I’d expect that plenty of people mistakenly think robots.txt is a security mechanism.

Unfortunately (DMCA, etc.) I would assume this would likely be considered a criminal act, and Americans or US entities anywhere in the world might be found to be committing a crime merely by linking to the results of such an engine.

I asked the Apache team twice for a disclosure date and a planned release. After that I set my own disclosure date, with an offer that we could still agree on a coordinated disclosure, but only if they set a defined date, not some undefined date in the future. That prompted a reply, but only to say that they were now preparing a release; they still couldn’t commit to a date. So I went ahead, informed the distros, and one week later made it public.

I think I’m aware of the intention of libtls. But intentions are irrelevant.

I think my requirement was stated clearly: It should be available on common Linux distributions. Aka “I want to do [packagemanagement installcommand] openntpd and get the feature”. I don’t think that’s the case right now.

If that gave me a wrapper of libtls around OpenSSL, I’d be happy to change my opinion.

I’m sure it already works on Linux though? At least on Arch and Alpine, OpenNTPD is included in official packages. Some distro I’ve installed recently — IIRC it was Alpine — asked me right in the installer whether I wanted ntpd, chrony or openntpd.

You can install OpenNTPD on nearly all Linux distributions; however, nearly all of those builds lack constraints support, because it depends on LibreSSL’s libtls. Thus, Hanno has a valid point here. I’d love to see a “usable” version of LibreSSL on Linux.
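For context, the constraints feature in question is a couple of lines in ntpd.conf (this mirrors the documented OpenNTPD syntax; it only works when the daemon was built against libtls):

```
# Use NTP pool servers, but sanity-check the time they report against
# the Date header of an HTTPS response.
servers pool.ntp.org
constraints from "www.google.com"
```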

They could generate a new cert for every device they sell, each for their “gateway” hostname, so that there isn’t one global key for all Linksys routers. You’re still always going to have to have a private key on the router, though.

Some places have no guaranteed Internet access. On some of them it is even prohibited (e.g. industrial applications) and considered a vulnerability.

The typical procurement/delivery/service subcontracting structure in large industrial projects (think oil rigs, power plants, mines, tunnels) makes maintaining your own DNS/CA ineffective and impractical. It requires getting everyone, from the government subcontractors drafting requirements for the bidding round, through foreign contractors hired four levels down in the bigcorp management hierarchy, to the actual device vendors, to understand, implement and maintain this infrastructure. Typically there are thousands of configurable devices sourced from dozens of vendors, with vastly different configuration/provisioning implementations, and most of them don’t have elaborate setup options like, say, Cisco IOS systems do. How do you provision your cert to an arbitrary vendor’s PLC or industrial endpoint switch? What if you have two thousand of them? The setup would take longer than configuring them in the first place, and further maintenance (these certs should expire, shouldn’t they?) is a nightmare. So inevitably, solutions for these projects converge to the lowest common denominator, which more often than not is a simple airgapped L3/L2 network split into a bunch of subnets, with no trust chain whatsoever.

Honestly, I wish people talking about crypto topics would stop using “grains of sand” and similar visualizations. They don’t reflect the concepts of scale very well, because they don’t take into account how much of that “sand” we can process. It doesn’t really matter if there are more “pieces of sand than in the whole world” if I have buckets that can move all that “sand” in a month. I also see no mention of Shor’s algorithm here; ignoring the quantum situation seems like a mistake, and I highly suggest reading up on post-quantum RSA.
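To make that concrete, a back-of-the-envelope sketch; the rate of 10^18 guesses per second is an invented, deliberately generous assumption, not a real machine:

```python
# How long would a full sweep of a 128-bit keyspace take at an assumed
# (and absurdly generous) 10^18 guesses per second?
keyspace = 2 ** 128
rate = 10 ** 18                     # guesses per second (assumption)
seconds_per_year = 365 * 24 * 3600
years = keyspace / (rate * seconds_per_year)
print(f"{years:.2e} years")         # on the order of 10^13 years
```

So for a 128-bit keyspace the “bucket” really cannot move the sand in any useful time; the caveat is Shor’s algorithm, which breaks RSA and similar schemes outright on a large quantum computer rather than merely speeding up brute force.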

Hmm, they didn’t just count CVEs; according to the slides, they did a three-month audit of BSDs and then made conclusions based on the found bugs. So, although close, it’s not exactly “vulnerability statistics”.

A set of requirements, a good design, an implementation, and strong verification of each by independent parties. That’s what was in the first security certifications. The resulting systems were highly resistant to hackers. At the B3 or A1 level, that usually showed during the first pentests, where evaluators would find very little or nothing in terms of vulnerabilities.

That’s a great presentation, despite deficiencies I’ll overlook; especially on the relationship between what vulnerability researchers focus on and what the CVE lists show. A good example of this, which I’ve been discussing in another thread, is OpenVMS. It lives up to its legendary reliability as far as I can tell, but I learned that its security was a literal legend: a mix of myth and reality. The reality was a better architecture for security than its competitors had back in the day, attention to quality in implementation, and low CVE counts in practice, with a famous DEFCON result. I figured what was actually happening was that most hackers didn’t care about it or just couldn’t get their hands on the expensive system (same with IBM mainframes/minicomputers). I predicted they’d find significant vulnerabilities in it, which happened at a later DEFCON. So: nice work, highly reliable, and not as secure as advertised, by far. ;)

Another good example to remember is the Linux kernel. I slam it on vulnerabilities, but that’s because they (especially Linus) don’t seem to care that much. The vulnerability count itself is heavily biased by its popularity, like Windows’ once was before Lipner, with his high-assurance security background, implemented the Security Development Lifecycle. I’ll especially note the effect of CompSci and of vendors of verification/validation tools. They love hitting Linux, since it’s a widely-used codebase with open code. Almost every time I see a new tool for static analysis, fuzz testing, or whatever, it gets applied to the Linux kernel or major programs in the Linux ecosystem. They inevitably find new stuff, since the code wasn’t designed for security or simplicity like OpenBSD or similar projects. So there’s more to report simply because there are more eyeballs and more analysis in practice, not just in “many eyeballs” theory. The same amount of attention applied to other projects might have found a similar number of vulnerabilities, more, fewer, or who knows what.

As of 2005, writer Barbara Blackburn was the fastest alphanumerical English language typist in the world, according to The Guinness Book of World Records. Using the Dvorak Simplified Keyboard, she maintained 150 wpm for 50 minutes, and 170 wpm for shorter periods. Her top speed was 212 wpm.

What’s the approach to disclosure when a website is notified about a breach like this? All bets are off as to what information has been leaked, so I suppose you wouldn’t be able to guarantee that your users’ data hasn’t been compromised.

This is a challenge you always face after learning about a severe security vulnerability. What did you do after Heartbleed? Or after any other bug with the potential to leak information?

To be honest: in this case I’d say just fix it and move on. If you have logs, you can check whether someone else has tried to access that file. (To the best of my knowledge this issue wasn’t discussed publicly before my blog post and article today. However, I don’t know whether others knew of it and used it for attacks.)

One annoying feature of fail2ban is its ability to automatically send abuse emails. I get these emails all the time because of my Tor exits, and because of their automated nature they have no way of telling that the server is a Tor node and that I’m therefore not the right person to contact.

The emails will accomplish nothing, if you use fail2ban please make sure you’re not sending them.

Actually, without a proper SPF record and/or DKIM signature there is a 99.999% chance that all of those emails get marked as spam anyway and never actually reach the admin. I think you greatly overestimate how useful abuse emails are; I think I’ve gotten one response ever. Also, Tor isn’t generating the emails, the users’ automated setups are, so “don’t use it” isn’t particularly useful advice.
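For illustration, publishing an SPF policy is just a DNS TXT record; a hypothetical zone-file line (domain and address are placeholders from the documentation ranges):

```
; allow mail for example.com only from this one host, reject the rest
example.com.  IN  TXT  "v=spf1 ip4:203.0.113.5 -all"
```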

You simply assume that I don’t know what TOR is and that I didn’t run a TOR exit node for 5 years. I don’t believe for a second that the vast majority of users configure their fail2ban servers to use a DKIM proxy or publish SPF records for their fail2ban server. Of all the abuse emails I have gotten, not a single one in my trash has a DKIM signature. I have also never gotten a response back from any of the senders.

My point was: if you’re going to operate a TOR node, then don’t whine about getting emails telling you your TOR node is being used for illegal purposes.

The thing with abuse mails is that they take time to respond to. Sometimes, a lot of time. I am fine with this if the senders of the abuse mail have taken the time to find my email address and send it to me. If the emails are automated, however, they often go to the wrong person (a human would see the rDNS of my node and realise that sending abuse emails to me is a waste of time), and they can be sent far faster than an admin can deal with them.

According to many laws you could actually be held liable for any attacks launched from your open proxy.

As far as I’m aware, my country has no such laws. It would be an absurd idea, and make running any ISP illegal.

And a special niche has been carved out just for TOR node operators,

I don’t think this is true at all. Many ISPs do not want Tor, and others tolerate it as long as bills are paid and emails answered. I have only ever encountered one ISP that didn’t even bother forwarding abuse emails to me, and I don’t count that as ‘special treatment’ or anything, because, simply put, there is nothing for me to do about the abuse emails. Whoever is sending them is either unaware of the Tor network (i.e., unqualified to send abuse emails) or a robot.

If it weren’t saving people’s lives from murderous dictators I guarantee node operators would be treated far more harshly

So you’re saying that if my completely legal server isn’t used by people suffering under oppressive regimes, you don’t believe in my right to keep it online, relatively harassment-free? That’s not a nice precedent to set…

You’re fishing for trouble here. I think my words were clear, and I don’t appreciate you trying to apply a different intention to them.

I just poked all my sysadmin friends on IRC and they all agree that they have never gotten a legitimate abuse email that wasn’t a DMCA notice. Again, this is uselessly aggressive, doesn’t help the conversation, and was not entirely clear.

I never suggested it did!! What I’m saying is that your emails are really not accomplishing anything except wasting people’s time: yours, because you set them up; my ISP’s, because they send the emails on to me; and mine, because I answer them.

TOUGH! As an administrator of a network on the internet YOU ARE RESPONSIBLE FOR ALL TRAFFIC THAT ORIGINATES FROM YOUR NETWORK. Do your job and stop whining about it.

If everyone thought this way, I’d get so many abuse emails that it would be totally impractical. You’d shut down half the nodes in the Tor network you claim to love overnight.

What country is that?

UK, well, Scotland really.

Yeah I bet.

I explained fully what I meant in the post. Please read it.

You’re fishing for trouble here. I think my words were clear, and I don’t appreciate you trying to apply a different intention to them.

No, I don’t think your words were clear at all. Can you elaborate please? You seem to be suggesting that reasonable and humane treatment is subject to circumstance.

On the contrary, many of the networks that are notified do take action.

Can you show anything to support this claim? Comments made by poptart et al seem to suggest otherwise.

You should probably double check the laws.

Ugh, well, the legal situation doesn’t seem totally clear, but there is nothing to suggest that I should be liable, and past court rulings seem to support this. If you can find a British or Scottish law I’m violating, I’ll shut down my nodes and your poor servers can finally have peace :-)

In the US there is no legal precedent set, which means there is the risk of being made an example of, but it also means it isn’t as black and white as simba makes it seem. Many universities across the world run exit nodes, and I’m fairly certain they know a bit better than us; it would be interesting to have one of them chime in with their thoughts.

I don’t have documents to show it, but there are lots of people who have reported their internet access being disabled over the years because their systems were infected with malware.

That’s mostly ISPs shutting down clients that are part of botnets, and it’s done by observing certain types of traffic. There might be another router operator involved to pin down where the traffic is coming from, but these clients are not blocked because people reported abuse. Especially in this case: most of the time they are DDoS bots doing amplification attacks, and no amount of abuse emails will help, since they almost always rely on UDP IP spoofing.

There is plenty of legal precedent, just not at the individual scale. Data centers get threatened a few times a year about sites like The Pirate Bay operating on their IPs, and the result is that they usually cave in to the demands of governments and law enforcement agencies, because they know the “wasn’t us” defense won’t work in court.

I worked at a university for two years as the only security technician, and I heard this exact argument five times. “Legal precedent” is an actual legal term for a case that establishes a principle. Give me the evidence in this case, because every EFF staffer (i.e., lawyer) I have ever talked to says there is none.

I don’t have “proof” because my friends tend to be pretty savvy, so none of them have been infected with those kinds of things.

This thinking is going to get them in trouble. Believing you’re bullet-proof makes a person arrogant and often ends up shooting them in the foot. I’ve had to do forensics on fully patched systems with strict SELinux rules that had extremely complex rootkits installed; the only reason we discovered the compromise was the amount of traffic being sent.

When all the evidence shows you as the origination IP and you don’t have any logs to prove that someone else connected to you, that’s a pretty tough defense to make.

I’ve got years of email logs and communication with the ISP in which I explain that the server is used as a Tor exit, plus the data available from Onionoo, which shows that the ‘exit bandwidth’ reported by the Tor network matches the traffic produced by the server.

The fact is you could disable your open proxy and then you’d no longer be contributing to the problem.

Woah, woah, woah. Are you seriously suggesting that Tor admins should just.. stop?! Do you realise how insane that sounds? Earlier in the thread you explained how you support Tor as it protects the vulnerable from tyrants. Tor is not a ‘problem’, the problem is criminal activity on the internet. The link I posted earlier explains how Tor doesn’t really boost the abuse, and you’ve no reason to suggest that it does apart from a few isolated incidents (where the criminals could have used other networks were Tor not available), so I think it’s a bit premature to call Tor a ‘problem’!

Data centers get threatened a few times a year about sites like Pirate Bay operating on their IPs and the result is they usually cave in to the demands of governments and law enforcement agencies, because they know the “Wasn’t us” defense won’t work in court.

No, that’s not why they cave. They cave because they have a legal requirement to shut down servers which are hosting copyright-infringing content. Tor exits do not host any content; they just relay it, so this does not apply to them.

Just today I’ve received emails from 4 ISPs regarding hackers using their networks, all of them positive and stating they will contact their customers as the next step.

Yeah, well, while you’re obviously pretty convinced that these emails work, I have my doubts.

I’ve received a lot of form-letter responses from TOR node operators too; most of them are friendly in tone, and they simply explain what a TOR node is and then say they can’t do anything about it.

Maybe we’ve spoken already then :-)

I don’t whine about getting those emails.

Obviously not, because you sent the initial email suggesting that you expect a reply!!

If you don’t want to get notified, don’t allow people to use your server as an anonymous proxy or relay.

I don’t object to notifications at all! I happily reply to emails I get. I do, however, dislike getting automated emails for pointless things like ssh logins.

Yes, fail2ban is great, and I run it. That said, address the root cause, not the symptom, and get passwords away from your SSH auth. Moving to certificate-based authentication moves the threat down to vulnerabilities in your SSH daemon or the cryptography of your certificate. Those are much, much easier threats to control than the ‘strength’ of passwords.
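A minimal sshd_config fragment for getting passwords out of SSH auth, as described above (standard OpenSSH options; a sketch, not a complete hardening guide):

```
# /etc/ssh/sshd_config: disable password logins, allow keys only
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
```

Make sure your key (or certificate) login actually works before restarting the daemon, or you can lock yourself out.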

If you consider fail2ban a security tool you’re doing it wrong. If your web app has vulnerabilities then fix them. This is hardly a good security strategy, because many attackers spread their attack attempts throughout botnets.

fail2ban is valuable, as it can help you lock out some attackers earlier and thus save resources (because an SSH login attempt takes more computing power than an iptables drop), but you really shouldn’t rely on it for security.
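For completeness, the SSH use case described above is a few lines in fail2ban’s jail.local (the option names are standard; the thresholds here are arbitrary choices):

```
[sshd]
enabled = true
# five failures within ten minutes triggers a one-hour ban,
# enforced as a firewall drop rather than an SSH-level rejection
maxretry = 5
findtime = 600
bantime = 3600
```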

So I have to make an admission: I would’ve failed the health bar test. I’m not a very active gamer, but I also feel I’m no complete stranger to video games and have played quite a few.

I mean I would’ve probably figured out which bar is the health bar after playing the game for 2 minutes. But it isn’t immediately obvious to me that the red bar is the health bar and the blue bar is not.

So I have no clue how film festivals work, but reading this text I’m wondering why this isn’t flat out illegal.

If I understand this right the way this works is that you pay a fee to submit your movie to a festival. So you’re basically paying them to watch and evaluate it. If they simply don’t do that - how’s that not considered a criminal scam?

Big wow. You’d think with all the hysteria about AMT backdoors somebody would think to try the most obvious backdoor imaginable. “Does an empty password allow login?” All this time and nobody checked that. I mean, this is something I do for all sorts of random websites.

First big company to top the Mac server weakness, where it would take any password for admin on one of the services. The code was probably something like If NotEmpty(password) Then AccessGranted(). Intel taking it up a notch, helping their partners in crime. NOBUS my ass…

This is all good and correct, but it fails to mention another major reason why HSTS is a good idea (and, as far as I know, one of the reasons it was invented in the first place): it prevents many instances of “SSL stripping” attacks.

When one visits a site via HTTP first and then gets redirected to HTTPS, an attacker can intercept the connection, prevent the redirect, and instead serve a malicious version of the page over HTTP. HSTS preloading completely prevents this attack, and normal HSTS reduces it to the very first connection to a site.
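For reference, enabling HSTS is a single response header; e.g. in nginx (the max-age value and the includeSubDomains/preload flags are common choices, not requirements):

```
# nginx: send the HSTS header on HTTPS responses
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```

The preload flag only takes effect once the domain is actually submitted to the browsers’ preload list; until then, normal HSTS semantics apply.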