
EastDakota writes "CloudFlare was originally conceived by the team behind the open-source community project Project Honey Pot as an easy way to protect any website from hackers and spammers. The concern from the beginning was that it would add latency. It was quite a surprise when the free service launched 8 months ago and ended up speeding up websites by 60%."

The entire article is incoherent. At the point we reach this particular garbled sentence, we have no idea what Prince's relationship is with Project Honey Pot. And we never hear anything further about DHS. We don't find out how exactly CloudFlare "makes sites faster," and I have no reason to believe it does that, or anything else useful.

According to the article, the speed boost comes from two things: 1) CloudFlare sniffs your content and inline replaces sections of it with equivalent content all served via the same connection... so the speedup comes from only having to use a single connection to get the entire page and 2) They are a globally distributed content system with 12 global data centers, similar to Akamai but smaller in scale, allowing content to come from a location closer to the end user.
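The inlining half of point 1 is easy to sketch. This is a toy version (names and page invented, not CloudFlare's actual rewriter): small assets get embedded in the page as data: URIs, so the browser never opens a second connection to fetch them.

```python
import base64

def inline_image(html, marker, image_bytes, mime="image/png"):
    """Replace an <img src=...> reference with an inline data: URI,
    so the asset rides along on the same connection as the page."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    data_uri = f"data:{mime};base64,{b64}"
    return html.replace(marker, data_uri)

page = '<img src="/logo.png">'
inlined = inline_image(page, "/logo.png", b"\x89PNG\r\n")
# the rewritten page now carries the image bytes inline
```

The obvious trade-off: inlined content can't be cached separately from the page, so this only pays off for small assets.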

Versus a traditional CDN it only helps on cost; versus some CDNs, it also provides protection against spam and hacking.

The service works by pointing your nameserver records at them and using them as a DNS provider: client -> DNS lookup -> CloudFlare colo A

CloudFlare acts as a reverse-proxy CDN by replacing some DNS records with their IPs instead of yours, so unless you tell them not to use their servers for a particular record, requests for that host are sent to them; they check the host's IP, and if it
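A toy model of that record substitution (the table and both IPs are invented for illustration, using the RFC 5737 documentation ranges): proxied records resolve to the edge, opted-out records resolve straight to the origin.

```python
# Hypothetical record table: which hostnames route through the proxy's
# edge IPs and which resolve directly to the origin server.
EDGE_IP = "203.0.113.10"     # example proxy IP (RFC 5737 range)
ORIGIN_IP = "198.51.100.20"  # example origin IP (RFC 5737 range)

records = {
    "www.example.com": {"proxied": True},
    "direct.example.com": {"proxied": False},  # opted out of the proxy
}

def resolve(host):
    """Answer a DNS query the way a proxying DNS provider would."""
    rec = records[host]
    return EDGE_IP if rec["proxied"] else ORIGIN_IP
```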

I worked for a now defunct company called Netli that did something like this. Not caching exactly, but putting proxies on either end of the path which would optimize TCP behavior. The speedups could be quite significant especially where latencies were high (long fat pipes), because your browser normally spends a lot of time waiting for entire round trips to occur as each new connection is opened and ramped up to speed. You can also "prefetch" content because you can determine which images the client will be
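The prefetch part is basically just parsing the HTML before the client does. A minimal sketch of collecting the image URLs a page will trigger, so a proxy could start fetching them ahead of the browser:

```python
from html.parser import HTMLParser

class AssetCollector(HTMLParser):
    """Collect the image URLs a client will need, so an edge proxy
    could begin fetching them before the browser even asks."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.assets.append(src)

page = '<html><body><img src="/a.png"><img src="/b.jpg"></body></html>'
collector = AssetCollector()
collector.feed(page)
# collector.assets now lists everything worth prefetching
```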

From what I've dug up, there are several sources of potential speedup: 1) Acts as a CDN (with 5 data centers - 3 US, 1 Europe, 1 Asia) to cache static files (such as images, JS, CSS) from a location on average nearer to most visitors, plus the cache servers are fast and well-connected. I read a claim somewhere that based on total traffic going through their system, they would be the 10th busiest site on the web (unverified). 2) Filters out enough "bad" traffic, which it never sends on to the site's originatin

You know, this is in part your fault. If you'd written a more interesting blog post than this and submitted it, it might have been posted instead of this article. Instead, you didn't and they posted this.

They offer a security product for websites, and in the process of designing it so that it didn't add much latency, they inadvertently made it into a CDN that speeds things up. There. Now we all know what the trick is.

Don't forget, it's bound to mess up your logs. Connections aren't coming from the user any more, they're coming from the CDN. Good luck doing your own filtering server-side from there. If they're caching parts of it, that means you have no prayer of seeing that request. You might get a Via header for some requests, but the cached requests? There won't even be a hit back to your server.

I'd consider using them for a few things I do, but there are some problems. I don't kn

Seriously? For your own filtering, use X-Forwarded-For (built into Apache) or mod_cloudflare. Logs and filtering are not an issue unless you are incompetent. CloudFlare also only caches static content such as CSS and images, so there is still a hit for the main request page that you can see in logs and filter against. As for security, use SSL. Sure, they have a solution for SSL too, but you can easily add a record and not run it through their system at all, such as secure.website.com. If you are running your
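For anyone not on Apache, restoring the client IP from X-Forwarded-For is a few lines. This is a hypothetical helper, not mod_cloudflare itself; the one rule that matters is to only trust the header when the direct peer is a known proxy, since anyone can set it.

```python
TRUSTED_PROXIES = {"203.0.113.10"}  # example proxy IP, RFC 5737 range

def client_ip(remote_addr, headers):
    """Recover the real visitor IP behind a reverse proxy.

    X-Forwarded-For is attacker-controlled unless the request actually
    came from a proxy we trust, so check remote_addr first."""
    if remote_addr in TRUSTED_PROXIES:
        xff = headers.get("X-Forwarded-For", "")
        if xff:
            # left-most entry is the original client
            return xff.split(",")[0].strip()
    return remote_addr
```

Feed the result into your logging and filtering instead of the raw socket address, and the "logs are ruined" complaint mostly goes away for uncached requests.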

It uses Javascript to obfuscate email addresses. That is helpful but not foolproof, contrary to the article. It stops most harvesters, at the cost of no-script users and the like. The chirpy article is less than trustworthy, so I would not assume the service is a CDN, or if it does cache that it will continue to maintain capacity. Or the speedup, if real, could be due to minifying html and serving small images in the Google News way, as inline data. The number of connections can be more important than speed
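The entity-encoding flavor of this trick (not CloudFlare's actual JS scheme, but the same idea) is trivial to sketch: a browser renders the address normally, while a harvester scanning the raw source for name@host sees only character references.

```python
def obfuscate_email(addr):
    """Encode an address as HTML decimal character references.
    Browsers decode these when rendering; naive regex harvesters
    scanning raw source never see a literal '@'."""
    return "".join(f"&#{ord(c)};" for c in addr)

encoded = obfuscate_email("user@example.com")
```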

CloudFlare is touted for intercepting and altering HTML to and from client sites. Isn't this a Bad Thing? Passwords, PII, etc. all being captured, inspected, possibly altered, and sent along. What a lovely way to capture and control information. And it's spread across 12 datacenters (and growing) so who knows how many copies of your SSN there are across CF. But at least it allows IT admins to not have to care or think about customer data security.

it isn't a CDN per se; it's a DNS proxy that caches a static page of your site when it goes down, and the data from that is insignificant compared to every user loading an image.
see How can CloudFlare afford to offer a free CDN? [cloudflare.com]

Which is interesting in that the response starts with "We built our network from the ground up for a single purpose: making any website faster and safer".

Which seems to stand in stark contrast to the premise of the article, which is that they didn't intend to make web sites faster. So which is it?

Further, I think that even if it prevents spam, it likely only delays it. In the article there is a quote that says: "We challenged an engineer on our staff to sniff a packet of data to see if there was an

The smartest crawlers out there don't just regex over the source any more; they have JavaScript engines baked in, and some even rope in the rendering engines from an open-source browser so they can get a look at the finished product.
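Easy to demonstrate. The same entity-obfuscated page defeats a regex over the raw source but not a crawler that looks at the decoded text (here plain entity decoding stands in for a full JS/rendering engine):

```python
import html
import re

# An obfuscated address embedded in a page, as decimal entities.
page = "Contact: " + "".join(f"&#{ord(c)};" for c in "user@example.com")

# A naive harvester regexes the raw source and finds nothing:
naive = re.findall(r"[\w.]+@[\w.]+", page)

# A crawler that "renders" first sees the finished text and harvests it:
rendered = html.unescape(page)
smart = re.findall(r"[\w.]+@[\w.]+", rendered)
```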

While they can certainly protect a site from various threats better than the average programmer (XSS etc.), the downside is that all login and personal information also goes through their site, enabling them (or a rogue government) to collect it. Also, their concept is great for launching targeted attacks at specific users, i.e. sending them tailored content like trojans (of course such attacks by rogue governments are feasible without CF, but harder). The question is: should they be trusted more than your own employees and your ISP? Right now, here in Europe, I'd say: for important stuff, no.
That said, here's an idea for a useful "app": automated A/B-testing for your site (build 2 versions of your website and let them decide who sees what, combine with Google Analytics or other stats => see which version works better for your users).

I know, thanks... But it requires 2 things to be handled by your web pages which CF could do more elegantly: a) put the Analytics JS on all pages, b) decide which version (A or B) to show a visitor and why (i.e. set a cookie so he still sees the same version when he comes back and all that) and modify the Analytics code accordingly. Putting that on the CF end would mean that even inexperienced people could set up 2 versions of their web site easily and benefit from Analytics A/B testing features (think wor
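The assignment half of that A/B idea is tiny. A stateless sketch (names hypothetical; cookie pinning and the Analytics wiring left out): hash the visitor id so the same person deterministically lands in the same bucket on every visit, with no server-side state.

```python
import hashlib

def ab_bucket(visitor_id, experiment="homepage-v2"):
    """Deterministically assign a visitor to variant A or B.

    Hashing (experiment, visitor) means the same person always sees the
    same version without any stored state; a proxy could additionally
    pin the choice with a cookie for visitors who clear nothing."""
    key = f"{experiment}:{visitor_id}".encode()
    digest = hashlib.sha256(key).digest()
    return "A" if digest[0] < 128 else "B"
```

Keying on the experiment name as well means relaunching a test under a new name reshuffles everyone, which is usually what you want.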