
{note: this is a pre-launch development & discussion thread in which we brainstormed features & technical architecture with the member community; it's been locked - no additional replies enabled - in order to encourage future discussions to take place in standalone threads ~admin}

For more than a little while, we've had a plan to expand the availability of our core secure network service via a slimmed-down version. By now, such things have come to be called "freemium" offerings, so rather than buck the tide unnecessarily we'll just concede that it's a decent description, and go forward from there.

But as we've poked and prodded at the "freemium cryptostorm service," there's been consensus on the team on a set of core requirements that are non-negotiable for us:

~ identical crypto suite and security hardening as the "full service" version
~ no annoying roadblocks to genuine use, like caps on total throughput or max hours per day or whatnot
~ no ads... NO ADS! Because, seriously...
~ no gimmicks, no needless complexity, no arbitrary compromises

With those basic starting points, our tech team has been slowly building up a deployment framework for this offering... and we also realised that calling it "this offering" or some such clunky name would get pretty old, pretty fast. So we grabbed a domain that seemed sort of neat: cryptofree.me (which, for now, just recursively points back to this thread... so don't get your hopes up!)

The cryptofree.me service will be based on a tweaked version of our existing network connection widget. That widget will support bandwidth-capped no-cost connectivity to the full cryptostorm network. The only difference between cryptofree.me and "full" cryptostorm network sessions is the bandwidth cap: current talk is to cap at 56kbps up/down... but that's still under discussion so we'll see how it plays out during beta testing.

The business-case on the whole thing (which matters if we're to keep the lights on and the dev folks fed & housed - which is important!) is pretty simple: we're confident that folks who use cryptofree.me will see how well the network does its job, and that xx% of those folks will decide to "upgrade" to a full cryptostorm session by purchasing a token. As to what that number "xx" is, we'll just have to see how it pans out in testing.

Also: the number of concurrent cryptofree.me sessions will be capped on a per-instance basis. This ensures that each session has access to sufficient network capacity to be useful, and that instances don't get "session flooded" by a bunch of folks all at once... sort of a mini "tragedy of the commons" situation that we think is best avoided. So the constraint will be how many machines we can allocate to cryptofree.me sessions - which in turn will be driven essentially by the in-the-wild economics of token purchases by cryptofree.me network members.

As we get ready to alpha test the tweaked cryptofree.me widget, we'll post details here - we're not announcing it yet on the cryptostorm.is website, as this is certainly more of a development-stage project & it seems better to house it here in the forum where members can participate more actively in the process. To that end, please do feel free to post in this thread suggestions, concerns, ideas, brainstorms, &c so we can leverage the best thinking of our community during the build-out process.

Despite the long gestation period this project has had - nearly five years since we first started kicking around the idea if we're going to be honest about it! - cryptofree.me is something we're really excited about. Mostly, the idea of dramatically expanding the pool of folks who can benefit from full-bore cryptostorm network security is what we like about it. We also like the "unmarketing" concept of allowing the service to sell itself, via upgrades from cryptofree.me to full cstorm membership: that's the sort of "marketing" we like best, and this aligns well both philosophically and architecturally.

Finally, we thought about establishing an announcement email list or something to let folks know when alpha cryptofree.me widgets are ready for testing, but that seems fiddly and not really necessary, eh? So instead, just keep an eye here or in our twitter feed & word will spread once we're ready for some community testing!

Thanks in advance for the feedback and guidance that will help make cryptofree.me a big part of our network's future.

I think this got mentioned in IRC a while back by one of your staff. I'll be honest, I have my reservations. I applaud your model of providing infrastructure and then letting others sell access to it; it's an excellent move not only in terms of security but also in that, over time, selling tokens would be something the core team doesn't need to worry about. Long may this model continue.

Now, the questions/points:

1. FreeVPN = Honeypot. Yes, all of the others are "ad-supported" and shit-slow. Yes, people like free as in beer. However, look at what you're up against. Technically you guys have proved yourselves, but you don't (seem to) have the marketing drive of people like (for example) VyprVPN or PIA. People simply don't know CryptoStorm, so when they see a free VPN come along from an outfit they have never heard of, what's the first thing they will think?

2. I think your efforts would be better spent on marketing the existing service, especially making a play of the fact that access is logless and anonymous, and doesn't have the hassle of using gift cards (which aren't available everywhere). This is a big selling point and you should be capitalising on it. Additionally, flesh out and formalise your reseller program and have a dedicated page for it to bring resellers on board. To my mind this would be worth more in pulling in users.

3. 56Kb/s is definitely too slow. I don't know what you've budgeted for, but EDGE speeds (384Kb/s) would be far more usable for messaging.

I'm really convinced cryptofree.me will be an attractor for the 'full blown' package. The stated non-negotiable core requirements reflect what CS is all about, so there will be no proverbial skeleton in the cupboard when people decide to subscribe.

As a tweaked widget will be responsible for capping, and the source of the official widget is public, the risk of widget reversing/fiddling should be anticipated (or already has been), to avoid uncapped connections while using cryptofree.me.

Fully supportive ... if you need support on alpha, beta or other semantic version levels of testing, please feel free to throw the ball ...

As much as I have reservations, I too will be happy to help test things out. Here's a question: if the widget is responsible for rate limiting, how will you support Linux, OS X and other non-Windows users?

Would it not be better for the freemium widget to connect to a specific set of instances which are rate-limited server-side? Or is this in fact what will be happening? Also, is it safe to assume that the freemium widget will have a token hard coded into it, or will there be special tokens minted?

Perhaps it's a misinterpretation on my part (a good thing, because it bubbles up questions/remarks) to assume that capping would happen by means of the tweaked widget. If capping will be handled server side (which is of course the preferred way), there's no restriction regarding OS flavour. Surely the CS team will clarify this. It's not impossible to use the current token scheme/principle; this could be done by adding an extra field to the table containing (precious) information on tokens, indicating whether this token needs to be capped or not. Of course several solutions are possible for this, depending on the extent of (technical) separation needed between the two services.

Possibly the most elegant solution would be to use virtualisation. Per physical node, have one VM with OpenVPN instances and a token database for paid service users like what we have now, and a separate VM with instances and token database for freemium users. Freemium tokens in one group of replicated databases, premium tokens in the existing group of replicated databases. This would be much cleaner than trying to build rate limit information into the token data and then trying to pass that data to a traffic manager.

The issue I see with this is the permitted bandwidth. It needs to be fast enough to be usable, and slow enough to not tempt people to torrent over it.

This is indeed a proper approach. System architects could also decide to have two OpenVPN daemons run on a single machine, each 'connected' to a dedicated DB instance. Different ways to solve this, all with pros and cons. Wrt. the speed, I cannot imagine what it would be like to be on 56kbps again; it has been a while, and we are dealing with richer content these days. Alpha and beta tests will drive this figure, I guess. It's a sensitive exercise, determining the speed and bending it into a teaser for more.

Interesting article...pretty much the experience of mobile users in rural and/or heavily congested areas. One issue with such a slow service is that as much as it is free and anonymous, so is Tor...and Tor is usable even across a 3G connection. The selling points of CryptoStorm are that it is secure, structurally anonymous and fast.

As for the exit node architecture, "too many databases spoil the broth". MongoDB has replication issues, so the fewer instances involved, the better.

It seems ridiculous now, but there was a time when you watched the clock when you were online. The early days of the internet seem archaic now – a single Acorn Archimedes computer at my school was able to go online – but in that age before Google we just didn't know any different. Using the internet actually seemed a special, rare privilege. And you went on for a purpose.

Usually it was for research, but the age of mass information was a fledgling idea and the internet was pretty sparse. The BBC website, for example, started in 1997, but you could only find out very basic information. And the idea of the web as a place for news was hardly existent.

As it's TechRadar's Speed Week, the powers that be decided I should spend a day using a modem and document how I got on. The main question I wanted to answer was whether today's internet would work on it.

When I told my father that I'd be spending a whole day going back to using a modem, he said it would be "painful." That's coming from someone who hardly uses a computer. Of course he was right.

I looked on a couple of forums, including one on Money Saving Expert, to see if people in general were still using dial-up. The responses? Actually surprising. This was typical: "Quite a few people around here (Mid Wales) have to use dial-up. Broadband from the local exchanges is rationed to a fixed number of connections and phone/dongle coverage is very patchy."

So although such people are in the distinct minority, it was worth bearing in mind that my experience would be akin to how some people in the UK have to use the internet.

Some of the other, flippant recollections of dial-up from the forums are also worth mentioning:

"My ex husband is still on dial-up. Yet more proof he's neanderthal man (not that I needed any mind, it's obvious he's from the dark ages as soon as he opens his mouth)."

"I spent many a night trying to muffle the modem when connecting late at night when my parents were in bed."

"I remember trying to look at porn on dial-up and it taking ages for the picture to load."

"I might just as well be on dial-up in the evenings, my Virgin broadband is that slow!"

So with those wonderful recollections in mind, I had to decide how I was going to get online. I do own laptops old enough to have modem sockets in, but they are pretty creaky, so I decided I'd procure a new USB modem and use it with my Windows 7 laptop.

I contacted US Robotics who duly sent me a USR5637 56K USB Fax Modem. That's right, you can also use it to send a fax – does anybody send faxes anymore?

Oh, and in case you're wondering (you probably weren't) you can use this modem on Mac OS X and Linux as well as Windows. I duly installed the drivers and connected up my modem, but then I didn't really know what to do next.

Then I was a bit stuck. I'd completely forgotten how to create a new internet connection in Windows. This hasn't changed a lot since Windows 95 or 98 and in Windows 7 you get to it via the Set up a new connection or network link in the Network and Sharing Center.

Dialling up

NOT WIRELESS?: Choosing how to connect to the net

I selected dial-up with a heavy heart, after which I set about entering my ISP details. There are still shedloads of numbers available, and a quick Google on my phone showed me a bundle of cheap dial-up details. All you need is the number, username and password. I clicked Connect.

Dialling up!

READY: This box has hardly changed since Windows 95

All was quiet, there was none of the kerrrrchsssss noise that you used to get with older serial modems. It seemed like it wasn't working and then, suddenly it was there. First a message appeared from my Livedrive backup software to say the connection to their servers had been restored – my uploads were quickly paused so my PC didn't try and squeeze a batch of MP3s down the phone line.

Then Dropbox kicked in and tried to upload the screengrabs and text I had already written for this feature. It's slow enough when you try and do that on mobile broadband, but this was excruciating. It was apparently happening at 686kbps, but Dropbox was obviously lying to me. I was actually achieving speeds of around 25-30kbps using my modem.

Skype logged in without issues, though it didn't connect a video call when I tried it – somewhat understandably – and you'd struggle to even make a Skype voice call on dial-up. Windows Live Messenger didn't even bother to log in automatically.

So I went through the usual services I check every morning. First Twitter – I started TweetDeck. The columns looked to be refreshing for absolutely ages and took over a minute to appear. At the same time (more fool me) I tried to load Facebook – which didn't load at all. TweetDeck then loaded a solitary tweet while still attempting to refresh the other two columns.

TweetDeck

FREEZE: TweetDeck was more like a lame duck on dial-up

Giving up, I decided to look at Twitter on the web. Unfortunately the website didn't even work properly. Loading Twitter.com was staggeringly slow and it didn't even bother to log me in automatically as it usually does. Either this is some security thing as I'm using a different connection on this PC, or I'm pretty second class as a dial-up user.

Twitter

WHO: Twitter refused to remember me on my second-class connection

I left Twitter open and decided to do some work. I often use Google Docs but this time decided to work offline. When I tried to access an – admittedly large – spreadsheet in Google Docs, loading was very slow. But I wasn't surprised – if there's a web app dial-up wasn't designed for, it's Google Docs.

However, it's not all doom and gloom; Gmail wasn't too bad and loaded fine on the simple HTML view for slow connections.

One of the main problems I had with dial-up while trying to do work is that I use Google all the time to look up various stats and other information. I was surprised that Google searches took an age to appear – Google Instant didn't work, while non-text search results like the images and videos didn't really appear! I was surprised that Google doesn't seem to adapt for slower connections as I thought it might – aside from the lack of Google Instant the page looked identical.

The most painful thing was that at various points it seemed like I was slowing to a total crawl, so I had to disconnect and reconnect.

After hours, I looked at Facebook. The service works OK on dial-up – but only if you're patient. It loads pretty sluggishly and the Top News column expands constantly as you start to browse it, because new elements are still loading.

The norm for some

It's no wonder we all used to rely so much on magazine cover CDs for programs to install; downloads are obviously super slow on dial-up and can take many, many hours. Something as bloated as Apple's iTunes takes around 8-10 hours to drip through your connection.

But there are some sites that work extremely well on dial-up – the BBC text-only or mobile sites have all the same great information and the plethora of sites specially adapted for the iPad, such as http://touch.facebook.com, are great examples of sites that are great on a dial-up connection.

So, by the end of my day, I'd actually got rather used to being on a slow connection. That's not to say I really enjoyed it of course – at times it was extremely difficult. It's just that I was able to adjust what I was doing. Instead of listening to stuff on Spotify or Last.fm I just used iTunes. Instead of looking at Facebook several times a day I just looked at it once. And downloading files? I didn't bother doing that at all.

But, of course, this was a single day for me. A lot of people have no choice. The Government's 2009 Digital Britain report said that "Up to 10 per cent of homes are still in not-spots, not-a-lot spots or not-at-all good spots" for broadband. A sobering thought for those of us so used to fast access.

There've been some discussions about whether to rate-limit server-side, via instance architecture, or client-side via a tweaked widget setup. I'd say this is an open question at this point, and that various team contributors have divergent opinions on the right way to do it. The feedback so far in this thread echoes much of that internal debate, in terms of opening cryptofree.me up to non-Windows OSes via a server-side setup, and so forth.

One extra feature I hope to see in the production architecture is a "fallback to cryptofree.me" option for widget-based network members. That is, if one is using a paid-for widget and that widget expires, there's a fairly smooth failover where the session will drop down to a cryptofree, limited-bandwidth session until a new token is available. To me, that helps ensure ongoing protection for members even if/when tokens expire, and is thus a security improvement.

But I'm somewhat of a second-order observer in all this, so don't take what I am saying as canonical, or anything close to it!

As far as I am aware, the widget invokes the OpenVPN binary anyway, so in order to rate limit via the widget, some kind of rate limiting parameter would have to be passed to the OpenVPN executable. Not only that, but changing the bandwidth value would be much quicker and easier handled server side, rather than trying to update multiple copies of a widget.
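For what it's worth, stock OpenVPN does expose one such client-side knob: the `shaper` directive, which throttles outgoing tunnel traffic in bytes per second. A hedged illustration of what a widget-passed cap might look like - this is not an official cryptostorm config fragment, and it demonstrates exactly the weakness being discussed:

```
# Illustrative OpenVPN client config fragment, NOT a cryptostorm
# config. "shaper" takes bytes/sec and only limits the *outgoing*
# direction; being client-side, it is trivially removed by anyone
# editing the config - which is why server-side capping wins.
shaper 7000    # ~56kbps upstream cap
```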

That is, if one is using a paid-for widget and that widget expires, there's a fairly smooth failover where the session will drop down to a cryptofree, limited-bandwidth session until a new token is available. To me, that helps ensure ongoing protection for members even if/when tokens expire, and is thus a security improvement.

Which sounds like a hard-coded token is needed, or at least one which is downloaded along with some configuration data (i.e. which freemium instances to connect to) in the event that a token has expired or is not yet available (a bit like the international emergency operator, 112, which doesn't need a SIM to be present).

Guys, I think cryptofree.me is a fantastic concept, and will make promoting the service much easier.

Understand the safety requirement once tokens expire, and offering access to activists; but as #parityboy mentions, free vpn acquires the tag of honeypot. Would you want scriptkiddies migrating their bedroom Ops from Tor, to a node near you? ... or did I miss something and would cryptofree.me give a restricted, as well as throttled access?

Thinking on the "fallback to free" mode you suggested, I think it could be made to work quite smoothly. The best way to do it would be to open up the tokenizer tool via a web API so that upon connecting (or startup), the widget queries the tokenizer tool to see if the token is valid. If so, it downloads the node list for the paid-for service. The user is then free to connect to one of the nodes.

If there is no token or the token is invalid, it downloads the "freemium" node list, and connects with a "freemium" token or a default set of credentials. I think the freemium node list should consist of:

windows-free-balancer.cryptostorm.net - for Windows users, obviously.
raw-free-balancer.cryptostorm.net - for Linux, OS X and everyone else.

The ability to select a specific node should be restricted to the paid-for service.
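The selection logic above is simple enough to sketch. A minimal illustration in Python - the function name and the paid-node hostname are hypothetical placeholders; only the freemium balancer names come from the posts above:

```python
# Hypothetical sketch of the widget's fallback node selection.
# Only the "-free-balancer" hostnames come from the thread; the
# paid hostname and function are illustrative placeholders.

PAID_NODES = ["windows-balancer.cryptostorm.net"]       # placeholder name
FREE_NODES = ["windows-free-balancer.cryptostorm.net",  # from the post
              "raw-free-balancer.cryptostorm.net"]

def pick_node_list(token_valid: bool) -> list[str]:
    """Valid token -> full paid node list; otherwise fall back
    to the freemium balancers (the 'fallback to free' mode)."""
    return PAID_NODES if token_valid else FREE_NODES
```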

Freemium Tokens

The interesting part will be your token model. If the node architecture I suggested previously is adopted - or something similar which uses a separate group of token databases and OpenVPN instances - then obviously the freemium tokens can be placed in these databases. The next obvious questions are:

1. Will tokens be available via resellers a la the current sales model, or limited to the core team only?
2. How long do freemium tokens last? 7 days? 14 days? 31 days? Forever?
3. If forever, will you remove the restriction for freemium nodes so that multiple devices can connect to the same node? This way, you only need to mint one token and have it publicly available for people to use. It also means that the token can serve as an effective emergency backup for when paid tokens expire.

Freemium Tokens And Frontline Activism

Let's say that freemium service is governed by tokens a la the current access model, and let's say that the freemium tokens are time restricted a la the current access model. Assuming it's intended to be a backup service (as well as a taster), let's pretend it has a 7-day validity. If an activist's paid-for token runs out and they activate the freemium token (which they always keep in their back pocket), the clock on that token starts ticking.

Let's say there's a 48 hour dead space between getting a new paid token and using the freemium one. Once they get their new paid-for token, they set aside the freemium one. However, that freemium token will run out 7 days after activation anyway, which means the activist will always need to keep a freemium token on tap, just in case.

So, what to do? Would the CS team consider moving to an hourly model (specifically for freemium tokens), where a token's TTL is based around the actual number of hours connected, such that a 7 day token would count up/down from/to 168 hours of actual connection time?
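The hour-metered idea can be sketched as a counter that only drains while actually connected. All names here are hypothetical illustrations, not cryptostorm's actual token schema:

```python
# Hypothetical sketch of an hour-metered freemium token: the TTL
# counts connected hours (168h = 7 days' worth), not wall-clock
# days, so an unused token in the back pocket never decays.

class MeteredToken:
    def __init__(self, budget_hours: float = 168.0):
        self.remaining = budget_hours  # hours of connect time left

    def record_session(self, hours_connected: float) -> None:
        """Drain the budget by one session's connected time."""
        self.remaining = max(0.0, self.remaining - hours_connected)

    @property
    def expired(self) -> bool:
        return self.remaining <= 0.0
```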

I wouldn't make the freemium token length too short as it will put extra strain on the organisation, unless these tokens are automatically generated. If using eternal tokens or tokens with long validity, tokens will become dormant (users buying tokens, users not using the connection anymore, ...) without traceability. A 3 or 6 month lifetime seems suitable in this case, I believe. This reasonable lifespan would also support the 'freemium as a backup' idea (of course, bad luck if both expire in the same period).

As has been mentioned already in previous threads, carefully choosing bandwidth is very important in order to avoid cannibalisation at both the business and technical levels.

I couldn't see any way this would work by doing the capping from the widget's side. As someone said above, that would only work on Windows. Since the widget is basically just a front-end to OpenVPN, it wouldn't be hard to just use OpenVPN manually to bypass it. Plus doing anything like that client-side is just plain dumb, never a good idea to trust stuff like that.

My idea was to use something like the linux `tc` command to do bandwidth limits per IP. Right now on the server-side configs we use an external script that verifies the token by checking it against the MongoDB backend running on that specific node. If the token's in there and it's valid/activated, the script exits with a status of "0". If the token doesn't exist or is invalid, the script exits with "1" (i.e., 0 for good, 1 for bad).

That sounds a lot smoother than any of my suggestions. :p Question: if a client supplies no token and no password, will that result in an AUTH_FAIL or will OpenVPN simply hang/keel over/panic?

@Fermi

Based on what df has suggested, we won't need to worry about tokens at all, which is a good thing. With that in mind, the critical differentiator will indeed be the allocated bandwidth. Anything considered to be too slow to torrent over is a good start. :p

Df's idea of using tc to limit bandwidth when a token is invalid or doesn't exist (or even when dummy or no input is given) is indeed a very good approach - doing capping client side is indeed not the preferred way, as it would be fairly easy to tamper with. It avoids the hassle of creating freemium tokens and provides a permanent fallback scenario for members. Even with this setup, statistics are possible through end nodes, because this will be needed to balance and maintain the QoS between freemium and membership connections.

I like that you guys are going the Muji route for marketing. Being someone who has studied marketing extensively, I can tell you that a company with an amazing product and no marketing can sometimes fail... but at the same time, a company that spends all its resources on marketing instead of focusing on its product will also fail. Personally I'd rather have a VPN focused on quality as opposed to quick profit. A strong institution lasts in the long run, as opposed to a marketing focus that is often too sensitive to small changes in the environment. Keep up the good work!

@parityboy "if a client supplies no token and no password, will that result in an AUTH_FAIL or will OpenVPN simply hang/keel over/panic?"

There has to be a token and a password, just won't matter what they are.

If you try to remove the auth-user-pass bit from the client conf you'll see:"Options error: No client-side authentication method is specified. You must use either --cert/--key, --pkcs12, or --auth-user-pass"

If you try to empty out client.dat (the file the widget uses to store the token hash + password), you'll see:"Mon Oct 06 21:27:56 2014 us=473462 Error reading username and password (must be on two consecutive lines) from Auth authfile: client.dat"

I figured the easiest thing to do is to give a separate free widget. Have it use the exact same code as the paid one, except that the free one is preloaded with a [locally] randomly generated token that fits the token syntax. Maybe some different text thrown in too, "click here for faster speeds" or whatever. Would be cool if we could have it where once bought, that randomly generated token turns into a paid account. As long as my usual insane amounts of input validation is done, I don't see any security concerns there.

Oh and on that suggested bandwidth cap, I agree that it's definitely got to be something too slow to torrent with. From the copyright complaint emails I see, I imagine most CS clients are doing that with this service. So something slower than that, but fast enough that it's not going to take 5 minutes to send an email/IM/whatever.

parityboy wrote:As far as I am aware, the widget invokes the OpenVPN binary anyway, so in order to rate limit via the widget, some kind of rate limiting parameter would have to be passed to the OpenVPN executable. Not only that, but changing the bandwidth value would be much quicker and easier handled server side, rather than trying to update multiple copies of a widget.

Quite a few of the smart folks I know are all echoing what you're saying here, and it sounds like server-side is very much the way to go. That's one problem solved!

Virtualising stuff is always tempting, because there's so much flexibility inherent in those models. However... big however here: there are genuine security considerations in virtualised environments - particularly those that are network-intensive. SDN as a field is so young and loose around the edges (by definition, pushing packets around in a virtualised framework is software-defined networking, whether it wants to be labelled as that or not) - and security considerations are rarely top of the list when it comes to deployed SDN. Not yet; someday, but not yet.

So, with the exception of non-infrastructure uses (hosting websites, etc.), we tend to shy away from virtualised setups on production hardware. Frankly, the fewer layers between the application and the metal, the better - in my grizzled, and humble, opinion. If I could get rid of BIOS images entirely in our network, I would. Maybe someday we'll figure out a way to do that. Each layer adds complexity, attack surface volume, and avenues for unexpected behaviours - all the ingredients of security fail.

tl;dr is that we'll almost certainly throw bare metal at this project, and scale it with more bare metal. Which does simplify things from a certain perspective.

Thanks so much, on behalf of the team, for the contributions thus far - it's really helping to focus our architectural efforts into the most promising avenues.

Oh and on that suggested bandwidth cap, I agree that it's definitely got to be something too slow to torrent with. From the copyright complaint emails I see, I imagine most CS clients are doing that with this service. So something slower than that, but fast enough that it's not going to take 5 minutes to send an email/IM/whatever.

Having thought about it, I reckon anything between 384Kb/s and 512Kb/s would be OK. My own connection's throughput is 8Mb/s down / 1Mb/s up, which is fast enough to torrent over in either direction.

I figured the easiest thing to do is to give a separate free widget.

What about from an end user perspective? It might be better to have one widget, however have a button that says "Click here for trial service", which generates a random token (or downloads one) which of course will fail and present restricted service to the user. I think it's better from a user experience perspective to have a single widget which handles both cases, rather than two separate downloads.

@PJ

I remember reading one of the earlier posts on the forum where the team described a Xen-based SDN setup. Obviously it was scrapped: was security the main reason?

parityboy wrote:Having thought about it, I reckon anything between 384Kb/s and 512Kb/s would be OK. My own connection's throughput is 8Mb/s down/1Mbit/s up which is fast enough to torrent over in either direction.

Not so long ago I was using a 512Kb/s down / 128Kb/s up connection...torrents worked fine, although admittedly not rapid. At the same time if I exceeded my monthly data cap I'd be slowed to 64Kb/s up & down...this wasn't as bad as it may seem as the cap was dynamic, it was allowed to burst up to 512Kb/s but the average speed over say 60 seconds would be 64K. This made general web browsing and email with its "bursty" (is that even a word?) traffic work pretty much as usual but P2P dribbled along at 64K. In effect it meant that there wasn't a speed cap as such but instead a throughput cap.

If the idea of the freemium service is to provide web/email but not run gigabytes of torrents perhaps a similar approach could work? Not a network geek so NFI how hard it would be to implement.
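That "burst up to 512Kb/s but average 64Kb/s" behaviour is a classic token bucket: tokens refill at the long-run average rate, and the bucket depth bounds how big a burst can pass at line speed. A minimal sketch with figures from the post above - the class and its names are illustrative, not how cryptostorm's nodes actually shape traffic:

```python
# Illustrative token bucket matching the ISP behaviour described:
# bursty web/email traffic passes at full speed, while sustained
# P2P-style transfer is held to the 64Kb/s refill rate.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # refill rate = long-run average, bits/sec
        self.capacity = burst_bits  # bucket depth = max burst size, bits
        self.tokens = burst_bits    # start full
        self.last = 0.0             # timestamp of last check, seconds

    def allow(self, now: float, bits: float) -> bool:
        """Admit `bits` of traffic at time `now` if the bucket covers it."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False
```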

parityboy wrote:I remember reading one of the earlier posts on the forum where the team described a Xen-based SDN setup. Obviously it was scrapped: was security the main reason?

With SDN as it currently exists, security is always an issue - the recent Xen memory-hopping vuln is an example of that.

However, with intensive administration we felt we could deploy Xen securely in our network configuration; this is based, in part, on years of previous experience using it in such setups.

What resulted in us de-Xenning our network side infrastructure was something else entirely: network throughput performance.

There is likely someone, somewhere in the world who knows how to tune Xen's virtual NICs (dom0 -- domU) in such a way that they can shovel large quantities of SNATted packets across without choking on (virtual) ring buffers. We never found that person, and we spent months searching. Nor did we manage to get good results from our own in-house tuning efforts. We could get "standard VPN" performance no problem (which is not surprising, as 90%+ of "VPN companies" are running cheap-assed VPS machines... which are just virtualised containers sitting on physical hardware): a megabit or two up and down, bad ping times, erratic latency, packet loss, mishandled high-volume UDP sessions, etc.

That doesn't work for us.

Our perf tuning on bare-metal systems has, after a year of on-and-off intensive effort on the part of myself and others on the network admin team, resulted in routine reports of 50 megabit/sec or higher throughput (both up and down) across multiple nodes in our network. And we're still pushing that performance envelope wider, on a weekly basis, with continuous perf-tuning improvements from the kernel forward.

With Xen in the mix, we simply couldn't get the same level of results as reliably. So it was stripped out.

Just to clarify (and to help keep things clear): the cap levels being discussed above - are we talking small-b bits or big-B bytes?

(internally, we try to standardise on b v. B to ensure we don't get these switched during discussions - it's happened, more than once, in the past)

I tend to think in small-b increments as that's often how DCs and wholesale providers describe infrastructure (gigabit pipes, etc.). But application-layer folks often think in big-B bytes since in their world the byte is often the relevant unit of measurement (a 2 teraByte hard disk).

The 8-fold difference between these two units of measure ends up being damned bloody relevant, which one can learn the hard way if one is not careful.
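That 8-fold difference is easy to make concrete: a small-b cap divided by eight gives the big-B figure an application-layer person would see. Using the cap levels floated in this thread:

```python
# Converting small-b bit rates (network-side convention) to big-B byte
# rates (application-side convention): 8 bits per byte.

def kbps_to_kBps(kbps: float) -> float:
    """Convert kilobits/second to kilobytes/second."""
    return kbps / 8


for cap in (64, 384, 512):
    print(f"{cap} kb/s = {kbps_to_kBps(cap)} kB/s")
```

So a "512" cap is a comfortable 64 kB/s if it meant bits, but a 4-megabit pipe if it meant bytes, which is exactly why pinning down the unit first matters.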

Also: we've got a bare-metal machine leased and provisioned specifically for cryptofree.me testing - likely the alpha rev will involve "raw" connection profiles via tweaked confs - because this is less complex for us to iteratively test and tune than the widget-based connects that will arrive later.

I'd expect us to have ports open to public alpha testing traffic on that machine sometime this week, if all holds as planned. We've still a bit of kernel massaging to do, but the HAF entries are done and (most) of the ovpn tweaking is ready for session integrity testing.

parityboy wrote:Which is why 64Kb/s sounds way too low. 8KB/s is unusable in this day and age.

Oh, agreed!

So if we synchronise on small-b bit metrics, what's the consensus on a good cap?

(the tools we're currently testing are more comfortable with a hard cap on per-session capacity, versus the sort of sliding-scale approaches discussed in some interesting posts above... which is not to say that a sliding-scale approach might not be implementable in the future)
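For context on what a hard per-session cap can look like at the OpenVPN layer (since the alpha is planned around raw OpenVPN confs), there is the `shaper` directive, which takes a value in bytes per second and limits outgoing tunnel traffic only - so a symmetric cap needs it applied on both ends, or server-side traffic shaping. Whether cryptostorm uses this mechanism or something else is not stated here; this is just one illustrative option. Converting the 384Kb/s figure under discussion:

```text
# Illustrative per-session hard cap via OpenVPN's shaper directive.
# shaper takes bytes/second and shapes *outgoing* tunnel traffic only.
# 384 kb/s = 48,000 B/s
shaper 48000
```

A kernel-level shaper (e.g. Linux tc) would be the more usual choice for enforcing caps server-side across many sessions, since `shaper` is a fairly coarse userspace mechanism.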

I still say 384Kb/s is a good start. However, the way I see it is like this: this network is a darknet which was set up with the preservation of security, privacy and anonymity as its main goal. Going by what I have read on these forums, CS was built to protect people who literally have their lives on the line. So the question is: how would you describe that demographic in terms of data consumption? What do they use CS for? YouTube? E-mail? Usenet? Forums? IRC/Jabber/whatever-IM-you-can-think-of? How rich is the data they consume?

BitTorrent and YouTube will be the biggest data hogs on a typical Internet connection, so maybe it's a case of supporting everything except those uses.

On a philosophical basis, we always cover all packets and all network traffic - to do anything less is the beginning of a slippery slide down the road of DPI and selective routing. For, to make any effort towards segregating certain network usage activities from others, one must do some sort of analysis to determine which packet is which... and this is anathema to us, as a team and as a project.

That's a "bright line" we have, and have had since the earliest days of our work together back in 2007. I really do not expect to see that ever change - it's what makes us who we are, as a project.

There's other tools that will do a pretty damned good job of protecting certain packets from surveillance... which is sometimes useful. The crying need we continue to see is for tools that provide behind-the-scenes, ubiquitous, consistent coverage of all network traffic emanating from a specific machine (or LAN).

parityboy wrote:BitTorrent and YouTube will be the biggest data hogs on a typical Internet connection, so maybe it's a case of supporting everything except those uses.

When I said "support", I certainly wasn't thinking of DPI. I really meant restricting bandwidth to a level at which the use of BitTorrent and other high-bandwidth applications would be unpalatable. My other questions were really asking whether the team had an inkling of the kinds of applications using the network. I very much doubt IRC sessions generate 20Mb/s of traffic.

Is restricting bandwidth the way to go? CS is a VPN with kickass online privacy measures implemented, just as the final draft / launch release spells out. It doesn't mention that CS exists to provide additional bandwidth. One of the cryptostorm.is floating screens mentions "unlimited use", not unlimited bandwidth. Wouldn't any capping done by cryptofree essentially be capping the user's ISP downstream?

The whole idea of CS is to provide anonymity. If a CS user goes over their ISP's allocated download limit, then I highly doubt that said user would still be able to pump out normal speeds while connected to the darknet after their ISP implements the bandwidth cap. If I look in my ISP's user account box and check download amounts, I can see how much I have downloaded. Well... I might be way off base with what I just said, because of what I just saw: at the moment I am 21GB over my peak period limit, and still rocking. I didn't even check last month's usage (14GB over), nor did I receive a letter or even a speed reduction...

I cannot comment on how this will change, if at all... between widget user and raw user (or eventually a CS router if all goes well)... Basically, what I am interested in finding out is whose bandwidth is actually capped, the user's or the darknet's... DesuStrike would get a headache reading this post... lol

I somehow managed to miss your reply concerning Xen. It sounds as if your performance issues had little to do with decryption performance and much more to do with the packet handling itself. From my own perspective, I have generally seen Xen lag behind in many tests, when put up against the likes of KVM, OpenVZ and LXC - the latter two are as close to bare metal as you're likely to get.

My suspicion is that full virtualisation environments like KVM, Xen and VMware ESXi simply aren't tuned for the kind of task that CS is asking of them; most of the environments they are deployed to are multi-tiered web applications where the bottleneck is storage IOPS rather than packet handling.

Just out of interest, did you ever try passing the NIC(s) directly to the VM, rather than using vNICs?