cryptostorm's community forum

Ξ welcome to cryptostorm's member forums ~ you don't have to be a cryptostorm member to post here Ξ
Ξ any OpenVPN configs found on the forum are likely outdated. For the latest, visit here or GitHub Ξ
Ξ If you're looking for tutorials/guides, check out the new https://cryptostorm.is/#section6 Ξ

Hi, I'm a paying customer, but that's not the problem. I know that TS isn't gone.
I'm just saying that before, when it was enabled by default, I could simply use a CS DNS server on its own and have all ads blocked without the VPN. That's really useful on an iPhone, for example, as mentioned in the previous post.
I also used a CS DNS server on my OS X machine (with dnscrypt) and all ads were blocked without installing AdBlock, AdGuard, or similar...


@jasonbourne TS isn't gone, it's just disabled by default now: https://cryptostorm.is/ts
If you're not a paying customer, you can also use it when connected to our free service: https://cryptostorm.is/cryptofree


TS is now disabled by default. See the note at the very bottom of the client configs on GitHub or https://cryptostorm.is/configs/

So to use TS, just remove that # character from the beginning of "#dhcp-option DNS 10.31.33.7".
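Concretely, the relevant line in an .ovpn config looks like this before and after the change (the DNS address is the one quoted above):

```text
# TrackerSmacker off (the shipped default):
#dhcp-option DNS 10.31.33.7

# TrackerSmacker on -- the same line with the leading # removed:
dhcp-option DNS 10.31.33.7
```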

A new widget version will be released soon that does the same thing from the widget's Options window.

One of the main reasons for disabling it by default is that a lot of people are using our public DNS servers when not connected to cryptostorm. Some of those people aren't paying attention to this forum or our twitter account, so most of them don't even know about TrackerSmacker. They just assume our DNS servers don't work when something doesn't resolve on ours but does resolve correctly using a different public DNS server.
So now only CS clients connected to the network can use it. It's set up on Cryptofree too, so non-paying people can still use it if they want.

Another reason is that a lot of people seem to prefer to do DNS-based ad blocking themselves on their own router.
With this setup, those people don't have to do anything extra.

For widget users, you can add the above two lines to your C:\Program Files (x86)\Cryptostorm Client\user\custom.conf file.
Just keep in mind that we're not responsible for any potential DNS leaks if you decide to use your own DNS settings.
After you connect, be sure to verify that you're not leaking, with a site like https://dnsleaktest.com/ or https://ipleak.net/


Tealc wrote: So... after a little troubleshooting with fermi on IRC, I removed the OpenVPN program and the Windows TAP driver and installed everything again (I also deleted the app data and all the folders), and now it works. It's once again that problem with the tun/tap driver in Windows 10: after some updates it gets broken for some reason. (I actually made a topic about this, and back then I didn't think it was that.)

I also ran into that on a test/dev win10 box I've been working with... and just assumed it was something I broke, tbh (I break stuff - especially Windows stuff... what can I say, it's a gift). I didn't know it was "A Thing."

That it doesn't manifest until a few cycles in is... unsettling. To my old-fashioned mind, anyhow. Probably it has to do with amazing new code polymorphism and docker containers and agile development and stuff (which is to say: things I'm too slow to understand fully). Also I blame Plesk for it, just because.

Anyhow, glad that particular hiccup is resolved.

Cheers!

(ps: it wasn't TrackerSmacker - woot!)


So... after a little troubleshooting with fermi on IRC, I removed the OpenVPN program and the Windows TAP driver and installed everything again (I also deleted the app data and all the folders), and now it works. It's once again that problem with the tun/tap driver in Windows 10: after some updates it gets broken for some reason. (I actually made a topic about this, and back then I didn't think it was that.)

So everything is fine and working OK.

Stay safe


Just a little off-topic: how are we doing with the DNS leak issue? I was using "block-outside-dns" in my .ovpn config files and it worked just fine, but after this TrackerSmacker implementation I can no longer browse the internet if I leave that option in; I can't resolve DNS names. So I was kind of obligated to remove that option from the config file, and when I did, I got this:

As you can see, it doesn't show the Portugal DeepDNS; it only queries my own dnscrypt server.

Still, I can't guarantee that this has anything to do with that TS stuff.
I sure hope you're not mad at me for being a little crazy about this freedom-of-view stuff.


I'll tell you a harm. About 2-3 weeks ago my menopausal partner's shit android solitaire app stopped allowing her to compare her scores with similar users of said app. She waved that damn tablet at me, as she does when something is not working, also as the kids do, and expected me to do my magic... but my magic didn't work. I couldn't work out why... clearly my deductive powers are lacking. I informed her of this... oh man. You try dealing with the look I got. It's like the last bit of desire she had for me wafted away in that moment. Fixing the internet is about all she needs me for, man, and that look is almost worse than the fact she barely wants to fuck me now....

She doesn't care about being tracked on that thing. It's a single purpose shit-app tablet that even in the face of my protestations she's accepted a Faustian pact with. She wants a free game that works is all, and doesn't give a fuck 'they' now know she plays some other shit apps and reads the app of a local paper...that for 3 weeks didn't show any images.

I know, I know...wtf. Evil tracking cunts. Still, no fucks are given on her part and it's not for us to reason why.

parityboy wrote: edit...

So, the Cryptostorm team has gone this far to protect our identities and browsing habits from (potentially) malicious actors. Is the filtering of potentially malicious ad traffic now a step too far?

I understand precisely where you're coming from. I'm like you - freedom is everything and I cannot stand the idea of being tracked, spied upon or censored. However, in this specific case I cannot see the harm in filtering out something that does very little good (if any at all) and yet can cause enormous amounts of harm.


@LoveTheStorm it's not a concern. I was half-joking about the possibility of non-CS IPs being disclosed. Even if I did go forward with the plan mentioned in the post you're referring to, I can assure you that I would never allow any client IP, CS or otherwise, to be embedded in anything that would result in their identification.

EDIT: gimme a few minutes while I forward your IP to the FBI... or maybe i'm just waiting for my burrito to finish in the microwave. (AGAIN, JOKING!!!!)

CS (yes, me too) would never divulge anything about any client. The structure of CS was designed in a certain way for a reason. If Mr. Agent of whatever agency put a gun to my head and said "Gimme the real IP of user xxx", I would be unable to oblige, even if I wanted to.

We (staff) trust our own judgement, but we don't expect customers to do so in such extreme situations. That is why anonymity exists in the CS structure. The basic idea of every part of the CS framework is: We don't know you, We don't want to know you, We can't possibly know you.

EDITv2: But we like you for being paranoid enough to know that you need a service like CS


For those that don't know, input validation is what it sounds like: validating expected input. It's the basis for all vulnerabilities; technically, all vulnerabilities are forms of missing input validation.

df tends to solve this problem simply. For the case of this /etc/hosts list, the format expected is "0.0.0.0" <space> followed by some hostname/domain.

These are preliminary (i.e., not in effect yet because there's internal stuff I need to add), but the regex is basic: ^0\.0\.0\.0 means the line starts with 0.0.0.0.
The second part ([a-zA-Z0-9\.]+) means that only a-z, A-Z, 0-9, or periods are allowed. Actually, it's going to be modified because dashes are allowed in domains too. I will most likely tighten it further to ensure that dashes and periods are in the right places (no domain/host should ever begin or end with a dash or period, nor have a dash and period next to each other).

I'm not even going to bother with that unicode nonsense that might bypass a-zA-Z because fuck that noise. It's all illicit in my regex book

I've been finding/fixing vulns in everything relating to input for ages, trust me, I know what I'm doing.
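The check described above could be sketched like this in Python; the tightened label rules (dashes allowed but never at a label's start or end, so never adjacent to a period) are an assumption about where that regex is heading, not the production pattern:

```python
import re

# One hostname label: starts and ends with an alphanumeric character,
# with optional alphanumerics/dashes in between. This forbids leading or
# trailing dashes, empty labels (so no ".."), and dashes next to periods.
LABEL = r"[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?"

# A whole blocklist line: "0.0.0.0", one space, then two or more labels.
HOSTS_LINE = re.compile(rf"0\.0\.0\.0 {LABEL}(?:\.{LABEL})+")

def valid_hosts_line(line: str) -> bool:
    """True only for lines of the form '0.0.0.0 <well-formed hostname>'."""
    return HOSTS_LINE.fullmatch(line.strip()) is not None

print(valid_hosts_line("0.0.0.0 ads.example.com"))        # True
print(valid_hosts_line("0.0.0.0 ad-server.example.com"))  # True
print(valid_hosts_line("0.0.0.0 -bad.example.com"))       # False
print(valid_hosts_line("0.0.0.0 bad..example.com"))       # False
print(valid_hosts_line("127.0.0.1 example.com"))          # False
```

Note that `fullmatch` makes the `^`/`$` anchors unnecessary, and anything outside the ASCII character classes (including unicode lookalikes) is rejected outright.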

Guest: df knows what he's doing with regards to input validation. ;)



I raised a concern that hasn't been addressed: input validation on the list.

Perhaps there is something fundamental I don't understand here, but AFAIU /etc/hosts can resolve any host to any IP. It's commonly used to resolve www.unwantedsite.com to 0.0.0.0, but it could also resolve www.importantsite.com to clever.malicious.hacker.ip.
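The concern can be illustrated with two hypothetical hosts-file entries (the hostnames and the address are invented for illustration; 203.0.113.66 is from an RFC 5737 documentation range):

```text
# benign blocklist entry: the ad host resolves to an unroutable address
0.0.0.0         ads.tracker.example

# poisoned entry: a tampered list could instead point a real site
# at an attacker-controlled address
203.0.113.66    www.importantsite.com
```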

This is why I only update my lists quarterly, and manually look through them.

Is there a plain and obvious reason I don't know about why a third-party-maintained, auto-updated hosts list isn't a security concern?

I've also read complaints on the GitHub that the list is only fetched over plain HTTP...

Yes to all of the above, apart from the conclusion... and that is where the philosophies part ways.
In my opinion all of those accomplishments are consistent with free, unrestricted, unmonitored, unhindered access... and that aligns, imho, with a non-censored, albeit potentially dangerous, DNS.

Personally I would love to have access to DeepDNS w/o blacklisting, even if this requires changes to the openvpn.conf, but I would not trade this for a requirement to use a restricted DNS. In that case I would (and have) switched to another DNS service, but would ofc continue to use CS.

I am really grateful for all the efforts and achievements, so thank you guys for this, very much appreciated.


Thanks PJ. I see all the new DNS servers in the official list; amazing. FYI, there is only one that doesn't work; it's the Moldova DNS/server, I think. I tried all the others and they work well; only that one seems problematic. Maybe there's a problem, I don't know.

Anyway, thank you, Sir.

[quote="Pattern_Juggled"][quote="LoveTheStorm"][quote]Do you know when ALL the crypto DNS servers will be in the official list https://download.dnscrypt.org/dnscrypt-proxy/ (dnscrypt-resolvers.csv)? For example, Moldova is not there; then we could always use them all.[/quote]

[b]Adding to aforementioned to-do list. :-P[/b]

[/quote][/quote]


df wrote: Either one, anything that tries to resolve that. But yea, post-connect. If you did it preconnect then your real IP would be in a temporary iptables rule, and I doubt any cs member would like that. Non-cs members using deepdns would also have their real ip in a rule (if they know to do that to disable TS, if they know about TS), but fuck em for not being on CS

Can this be dangerous for privacy? It seems so; I use dnscrypt (CS DNS) for all my connections, I mean always, as the default, not only for the CS VPN.

Can the password method avoid this problem?

Thanks


On reflection, I'd say the password would be the easiest client-side trigger, since it would only ever have to be set once. It's a lot less hassle than running nslookup every time I connect, and I wouldn't be doing that on a smartphone anyway.


Actually, if I can figure out a half-decent (super-easy) way to do the opt-out thing, then I might just reverse the effect. What I mean by that is, make TS non-default, as in the client has to specify something in his config in order for it to activate. For widget users, I would set this to default, and for linux ppl maybe add a comment in the github ovpn configs that says "disable this comment to enable TS" etc. etc.


Either one, anything that tries to resolve that. But yea, post-connect. If you did it preconnect then your real IP would be in a temporary iptables rule, and I doubt any cs member would like that. Non-cs members using deepdns would also have their real ip in a rule (if they know to do that to disable TS, if they know about TS), but fuck em for not being on CS :-P


It'd work the same for all OSes. You resolve nots.cryptostorm.is, DeepDNS picks this up either via pdns-recursor or curvedns (tho the latter would require src-edits), and it would trigger a server-side script that runs some iptables commands that forwards further DNS requests from the same client IP (10.* for ppl on-cs, real ip for non-CS ppl using deepdns) to the secondary non-TS pdns-recursor instance.

EDIT: keep in mind, these are still just in the "ideas i'm trying out on a dev server". nothing has been implemented yet aside from turning off TS on the london server.
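As a rough sketch of the iptables side of that idea (everything here is an assumption: the chain, the REDIRECT target, and that a secondary non-TS pdns-recursor listens locally on port 5353; nothing like this is deployed):

```sh
# $CLIENT_IP is the source DeepDNS sees: 10.* for on-net members,
# the real IP for off-net users. After the nots.cryptostorm.is
# trigger fires, send that client's further DNS queries to the
# assumed non-TS recursor instance on port 5353.
iptables -t nat -A PREROUTING -s "$CLIENT_IP" -p udp --dport 53 \
    -j REDIRECT --to-ports 5353
iptables -t nat -A PREROUTING -s "$CLIENT_IP" -p tcp --dport 53 \
    -j REDIRECT --to-ports 5353
```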


I'd say that the password method would be the cleanest. That way you could have the opt-out mechanism on every node.

I understand the argument that people want full control over their internet, but you have to understand that the majority of CS members are NOT "power users" that are capable of implementing their own protections. If they were, the widget wouldn't be necessary.

Not only this, but the majority of users are also Windows users, who in turn will be vulnerable to the kind of dangerous mal-AD-y (get it?) that I linked to earlier in this thread.


I'm still trying to figure out the easiest way for clients to opt out of this feature completely. Not sure if it's even possible, as this is a system-wide (/etc/hosts) method and relies on the pdns-recursor (https://doc.powerdns.com/md/recursor/) used in DeepDNS, which doesn't have any type of conditional functions (lua-config-file/lua-dns-script doesn't count). So it'll take some trickery to pull off, and I might just get frustrated and leave it off on one or two DeepDNS IPs so people that want to opt out can use those.

Aside from faster web browsing, I also thought this would be a good idea because the type of ad trackers we're blocking can be used to circumvent the VPN: if a client visits a website using one of these trackers while not on CS, then later visits the same (or another) website using the same tracker while on CS, that could be used to identify them. There's also the myriad of browser exploits that many of these malicious ad servers will use to attempt to gain access to your system, or force your browser to send traffic elsewhere without your knowledge, or god knows what else.

I understand the argument that people want full control over their internet, but you have to understand that the majority of CS members are NOT "power users" that are capable of implementing their own protections. If they were, the widget wouldn't be necessary. Hell, most of them aren't even aware of TrackerSmacker and only notice that web browsing is faster on CS for some reason.

That being said, I do agree that the more savvy users should be able to connect to whatever they want, be it malicious or not. For that reason, I've gone ahead and disabled TrackerSmacker on the London DeepDNS IP (31.24.34.50, turing.deepdns.net) so that people that want to opt-out can specify that as their DNS server. That's just a temporary fix until I can figure the best way to go about a proper "opt-out" (unless that ends up being impossible, then I'll just do my original/lazy idea of turning TS off on a couple of resolvers).

For those interested, the current idea I'm tinkering with involves a secondary pdns-recursor instance (which is the thing that makes TS possible, via "export-etc-hosts=on" and an /etc/hosts populated with ad hosts) that has TS disabled, plus some iptables rules that can be triggered by *something* to forward further DNS requests from that client to the non-TS pdns-recursor instance (one with "export-etc-hosts=off"). That would permit those clients to still use transparent .onion etc., but not be limited by TS. I was thinking a possible trigger would be that a client who wants to opt out resolves 'nots.cryptostorm.is'; the DeepDNS framework would find a way to pick this up and execute a script that does some magical iptables stuff to send all further DNS requests from that client to the non-TS pdns-recursor. But that would require some clients to add '--script-security 2 --up /path/to/your/script' to their OpenVPN config, and the script would be something that resolves 'nots.cryptostorm.is'. If I can't figure out how to do that, maybe I can do it so the client only has to specify a specific OpenVPN password ("nots" for example), since we don't really use passwords in the OpenVPN context.
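Client-side, the resolve-to-opt-out idea would look something like this in an .ovpn file (a proposal only, not a shipped feature; the script path is the placeholder from the post and nots.cryptostorm.is is the suggested trigger name):

```text
# hypothetical opt-out additions to a client .ovpn config
script-security 2
up /path/to/your/script
```

where the up script would simply resolve the trigger name once the tunnel is up, e.g. by running `nslookup nots.cryptostorm.is`.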


Anonymity
Zero logging
Taking heat from copywrong trolls and spineless DCs
Routing our traffic through a small number of IPs
The token system and its attendant resellers
Acceptance of cryptocurrencies

All the while enforcing agnosticity (is that a word?) - generally, our traffic isn't filtered in any way, it certainly isn't metered, there are no restrictions on the sites we visit and we even get transparent access to the Tor and I2P networks.

So, the Cryptostorm team has gone this far to protect our identities and browsing habits from (potentially) malicious actors. Is the filtering of potentially malicious [url=http://blog.trendmicro.com/trendlabs-security-intelligence/malvertising-campaign-in-us-leads-to-angler-exploit-kitbedep/]ad traffic[/url] now a step too far?

I understand precisely where you're coming from. I'm like you - freedom is everything and I cannot stand the idea of being tracked, spied upon or censored. However, in this specific case I cannot see the harm in filtering out something that does very little good (if any at all) and yet can cause enormous amounts of harm.


I've said it once and I will say it over and over again: censorship is censorship, no matter whether someone thinks it's for the right reason or not. And as we all agree, this is for a good, very good, reason, since nobody likes adware/malware/cookies/trackers and nowadays the web is full of them.

Still, "My data, my problem" - it's my motto! Just as I don't agree with my ISP deciding what sites I can or cannot visit, or with my bank's Android app telling me that I can't use it on a rooted Android phone, it's my choice, my problem. If I can't choose, for example, to see the fu**ing ads today and not tomorrow, that's a limitation of my freedom.

I stand beside everything CS has made and this wonderful VPN product but I cannot agree with this decision.

Stay safe everyone,

Tealc

Monica Miller wrote:

Tealc wrote:May I be the devil's advocate?
I believe that CS shouldn't use adware blockers at all.

....
Yours truly,
Tealc

After some thought, I think I have to add my little squeak of dissent to yours. I was using this same list in /etc/hosts on my router where I could edit it at will, and that was suiting me fine. Although being just a tinkerer of things computing, a mere crawling beastie to those further along the evolutionary path, such as the peeps running CS, I still want to make my own judgements and mistakes; not be babysat. I am apparently an adult, and as such I want solid information on which to make my own choices, not have them made for me.

It's all a bit 'knight in shining armour'.


Tealc wrote:May I be the devil's advocate?
I believe that CS shouldn't use adware blockers at all.

I've already tried this with my openwrt router (a couple of years ago), and no matter what list you find out there, we will see a lot of users saying that they can't connect to this or that site. In a nutshell, this kind of blocking should always be up to the user and not imposed by the CS VPN!

You could maybe have one dnscrypt instance with an adware blocker, but it should never be forced by the push request in the OpenVPN server-side conf file.

We can consider this a limitation of my "internet freedom" - "My data, my problem". I chose a VPN to be free of my ISP's restrictions and EU data retention, so please, CS staff, consider a new approach to this stuff.

In the long run this kind of stuff will bring more problems than solutions.

Yours truly,
Tealc

After some thought, I think I have to add my little squeak of dissent to yours. I was using this same list in /etc/hosts on my router where I could edit it at will, and that was suiting me fine. Although being just a tinkerer of things computing, a mere crawling beastie to those further along the evolutionary path, such as the peeps running CS, I still want to make my own judgements and mistakes; not be babysat. I am apparently an adult, and as such I want solid information on which to make my own choices, not have them made for me.

It's all a bit 'knight in shining armour'.


df wrote:Raka74: I'm not seeing that problem when using Chrome. Granted, I don't speak Dutch, but there's enough English on there to guess which sections are the ones you're talking about.

Thanks for checking.

So this has nothing to do with adware/crapware blocking I'm guessing...

I just tried on my macbook with Firefox and Chromium - it also doesn't work with those browsers.

Additionally, I have Cryptostorm/VPN running on my router (Asus/Merlin), but even when I add the MacBook to the rules for routing via the VPN while Tunnelblick is disabled on the laptop, I still get the same problem - it's not working when connected via Cryptostorm.

Really weird.

Should I log a separate topic for this because it's not related to the blocking this topic is about?



[quote="df"]In the mean time, I think the best course of action (for stuff like wtvy.com and v0cdn.net) is a github repo of ours that contains a whitelist. People submit something they need whitelisted, and once staff manually verify that the host isn't evil.com, the server-side scripts automagically update /etc/hosts.[/quote]

Seconded.

In fact, [url=https://github.com/cryptostorm/cstorm_deepDNS/blob/master/TrackerSmacker/whitelist.txt][b]here we go[/b][/url]:

[code]https://github.com/cryptostorm/cstorm_deepDNS/blob/master/TrackerSmacker/whitelist.txt[/code]

Anyone that would like to help maintain, approve pull requests/merges, etc. - drop a note & we'll make it so. Here's the [url=https://twitter.com/cryptostorm_is/status/709893733756436481][b]public announce[/b][/url] (as it were).

Looks like cs1.wpc.v0cdn.net is in the blacklist due to some trackyness (probably by something besides nuget.org).

Added it to the whitelist, should resolve now.
Also whitelisted wtvy.com from that previous post.

This setup uses /etc/hosts, which is system-wide, so doing a selective/opt-out feature would mean multiple deepdns instances, each consisting of multiple programs tied to an IP. Still haven't figured out the best way to do such a thing, but trying to think of one.

In the meantime, I think the best course of action (for stuff like wtvy.com and v0cdn.net) is a GitHub repo of ours that contains a whitelist. People submit something they need whitelisted, and once staff manually verify that the host isn't evil.com, the server-side scripts automagically update /etc/hosts.
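The regeneration step could be as simple as filtering the upstream blocklist against the whitelist before it lands in /etc/hosts. A sketch under assumed file names (the tiny demo lists stand in for the real StevenBlack-style blocklist and the repo's whitelist; none of this is cryptostorm's actual layout):

```shell
#!/bin/sh
# Rebuild the blocking hosts file from an upstream blocklist while
# dropping any host that appears in the whitelist.

# Tiny stand-ins for the real lists.
printf '0.0.0.0 ads.example\n0.0.0.0 wtvy.com\n' > blocklist-hosts.txt
printf 'wtvy.com\n' > whitelist.txt

# Keep only "0.0.0.0 <host>" entries whose hostname is NOT whitelisted.
awk 'NR==FNR { wl[$1] = 1; next }   # first file: load whitelist hostnames
     $1 == "0.0.0.0" && !($2 in wl) # second file: print surviving entries
    ' whitelist.txt blocklist-hosts.txt > hosts.generated

cat hosts.generated
```

In this scheme, pointing the resolver's exported hosts file at the filtered output would unblock whitelisted names on the next reload; here only the ads.example line survives.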


crptomon wrote:This issue only started last week, but has caused me all sorts of headaches having no access and now a week of wasted work time. Consequently I'm not a fan of any blocking feature you may have. Blocking webpages is a show stopper for VPN usefulness if this is the cause.

Gah - apologies for the delayed approval of your post (not sure wtf happened, but will check). (Also, I did a bit of formatting cleanup.)

On more substantive matters: I can't see how what you're reporting is TrackerSmacker-related... but we're taking a look, to be sure. It's certainly not intentional on our part. We'll post analysis results here shortly.

EDIT: I stand corrected; fixing now:

[attachment: Screenshot (43).png]

Cheers.



When accessing this web page:

[url=https://www.nuget.org/][b]nuget.org[/b][/url]

I have no issues. However, if I try to access the [url=https://dist.nuget.org/index.html][b]downloads menu[/b][/url] e.g.

[code]https://dist.nuget.org/index.html[/code]

I get a "server not found" error.

Without using the CS VPN it seems to work fine. Can anyone shed some light on this issue? I can't use Visual Studio 2015 while this occurs.

This issue only started last week, but it has caused me all sorts of headaches: no access, and now a week of wasted work time. Consequently, I'm not a fan of any blocking feature you may have. Blocking web pages is a show-stopper for VPN usefulness if this is the cause.
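One quick way to tell a filtered name from a dead server is to look at what the name actually resolves to. This is just a diagnostic sketch, not anything official; it assumes a hosts-file-style filter that sinkholes names to 0.0.0.0 (other setups may simply return no answer at all):

```shell
#!/bin/sh
# Distinguish "blocked by a DNS filter" from "host is down": a hosts-file
# style block typically resolves the name to 0.0.0.0, while a real outage
# fails later, at the TCP level. Pure resolution check.

is_sinkholed() {
    addr=$(getent hosts "$1" | awk '{ print $1; exit }')
    case "$addr" in
        0.0.0.0) echo "$1: sinkholed (resolves to 0.0.0.0)"; return 0 ;;
        "")      echo "$1: no answer at all"; return 1 ;;
        *)       echo "$1: resolves to $addr"; return 1 ;;
    esac
}

# The host from the post above; exit status 0 only when it's sinkholed.
is_sinkholed dist.nuget.org || true
```

If the name resolves normally off the VPN but comes back empty or as 0.0.0.0 on it, the blocklist is the likely culprit rather than the site itself.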

Found my first oddball result. A [url=http://www.wtvy.com][b]television channel website[/b][/url] (wtvy.com) is blocked. A story popped up in social media and when I went to click on it, I couldn't resolve the server. Set DNS to Google for a sec and it resolves fine. I wonder why this site is on the blacklist.

Just using as a testing example.


twelph wrote:It might just be pertinent to wait it out and see if it actually affects users in the long run. Maybe the list will be maintained well enough that it won't be an issue. He did say that it was enabled for a whole week without anyone even having any trouble, maybe we are making too big of deal out of this?

One of the ways to see if something like this causes obvious problems is to implement it and see if problems arise (which, of course, we would never do in a security-intensive situation or with new cipher primitives, &c.). On the one hand that seems inelegant and reckless; on the other hand, all the pre-implementation testing in the world doesn't add up to actual results from an actual implementation.

The philosophical issues brought up in posts here are solid, and important. I look forward to discussing those - and likely they will result in some sort of dual- or hybrid-option arising from this. Meanwhile, we've implemented TrackerSmacker everywhere in the network because, simply put, it works really well.

This advertising bloatware on so many websites is a serious security issue - which means we can't (or choose not to) simply ignore it if there's a way to make real improvements on behalf of members. That has to balance against our packet-agnostic roots... which go back nearly a decade, into the project's earliest days. So it's an important, open topic.

My personal (not official team) view is that the implementation has been seriously low on problems or complaints from members. The network, to keep things in order-of-magnitude generality, handles thousands of web browsing sessions per day (likely a lot more, but that's a very conservative lower bound)... so if we see, say, a dozen complaints, that's a really low percentage. (Which ignores the very important issue of folks being frustrated and not actually complaining... and that's not to be ignored, since frustration is the hidden variable we're really trying to measure, rather than complaints.)

One reason we lean towards production-based testing of things like TrackerSmacker is that there's no installed codebase on member machines to consider: it's all done network-side, and it's all easily modified by our admin team as needed. So, for example, if the whole thing proved to be a disaster... we could just rm it from the nodes, and everything's back to a clean slate. That kind of flexibility to tinker and fine-tune on the fly means the experimental process can move faster, retain time-reversibility, and in general avoid engendering member frustration if an experiment like this ends up being ill-conceived, when all is said and done.

As Graze is wont to say (were he saying it, which he's not... so I'm paraphrasing): put the feature in 'permanent beta' and tune the heck out of it; if it works well, it'll prove itself out - and if not, take the learning from the experience and apply it elsewhere. Well said, Graze!

Cheers.


I think this is an awesome enough feature that I really want it to stay implemented, but not at the cost of turning people away. I'm not precisely knowledgeable in this subject matter, but do you think there is a way to have an opt-out switch in the configuration files or the widget, so that people affected by it can choose to disable it and switch to a non-filtered DNS?

It might just be prudent to wait it out and see if it actually affects users in the long run. Maybe the list will be maintained well enough that it won't be an issue. He did say that it was enabled for a whole week without anyone even noticing any trouble; maybe we are making too big a deal out of this?


I reckon there should be an exit node or two that don't carry the server-side adblocks. I've been using Acrylic DNS for ages and have managed to build quite a block list. It also supports wildcards, which makes things much easier (slapping a block on *usercontent* blocks everything with that term in it, same thing for *microsoft*, *twitter*, *akamai*, etc...). I am not sure if I will be able to use this when the v3 Windows widget comes out, since it will incorporate DNSCrypt, but I'm willing to tinker with it to see if it can work in tandem.

I found one clash already, not sure if it's on my side; haven't tested fully yet. I can't play animated GIFs or videos on Twitter anymore. Anyone else experiencing this? I have a workaround since I can just download the video using a third-party app called Internet Download Manager... just curious is all.


I'm torn. I don't need this myself - ads are already blocked at the router with Privoxy, with a hosts file at the OS level, and with uBlock in the browser. (Overkill, I know... redundancy for the win.) I DO think it's a good idea in general, though (obviously, or I wouldn't do so on my own network).

As Khariz pointed out, the auto-update is the rub. I think that might be a security risk? Is there any sort of input validation on the list? I.e., can it redirect to anything besides 0.0.0.0? I only update my lists quarterly or so, and I take the time to check them manually... never have found anything out of place.
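That sanity check is easy to run yourself against any downloaded hosts-format list: every entry should point at a sinkhole address (0.0.0.0 or 127.0.0.1), because a line that maps a name to a real IP would be a hijack rather than a block. A sketch with a tiny demo list (the file name and the bad entry are made up for illustration):

```shell
#!/bin/sh
# Flag blocklist entries that point anywhere other than a sinkhole address.
# Demo list with one deliberately malicious redirect; a real check would
# read the downloaded blocklist instead.
printf '0.0.0.0 ads.example\n127.0.0.1 tracker.example\n93.184.216.34 bank.example\n' > hosts-list.txt

# Skip comments/blank lines; print anything not sinkholed to 0.0.0.0/127.0.0.1.
awk '!/^[ \t]*(#|$)/ && $1 != "0.0.0.0" && $1 != "127.0.0.1" {
         print "suspicious: " $0
     }' hosts-list.txt
```

Here only the bank.example redirect gets flagged, which is exactly the kind of entry an auto-updated list should never contain.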


Khariz wrote:I guess my biggest concern is that we are talking about a third-party-maintained database of sites. I realize CS doesn't have the manpower to have someone dedicated to adding and removing things from the database manually, but how much can we really expect CS to bother editing the database to reflect user requests to unblock sites?

Every time the third-party list updates and CS re-imports the new database, they will have to go back and re-remove the stuff that we have asked to be removed. That seems unlikely to occur in a timely fashion.

I'm actually quite enjoying the new features. Just for giggles, I uninstalled some of my adblockers on various devices and I'm seeing hardly any ads at all. It works pretty well. But I think Tealc has valid concerns.

Concerned CS users probably need to hop onto the github repo upstream and help them make sure that the entire experience is smooth, it might need to be a community effort. I'll probably get on there myself from time to time if I see any issues that need resolved.


I guess my biggest concern is that we are talking about a third-party-maintained database of sites. I realize CS doesn't have the manpower to have someone dedicated to adding and removing things from the database manually, but how much can we really expect CS to bother editing the database to reflect user requests to unblock sites?

Every time the third-party list updates and CS re-imports the new database, they will have to go back and re-remove the stuff that we have asked to be removed. That seems unlikely to occur in a timely fashion.

I'm actually quite enjoying the new features. Just for giggles, I uninstalled some of my adblockers on various devices and I'm seeing hardly any ads at all. It works pretty well. But I think Tealc has valid concerns.


I will have to respectfully disagree with Tealc. I don't view this as a form of internet censorship as long as they at least make an effort to listen to customer concerns. Considering that this same DNS system allows us to access onion sites natively, I would say that balances the equation. I've been using it lately and have to agree that it's a definite improvement over extensions like uBlock, which was great as is. The thing I'm most excited about is that this should make the CS servers much healthier, since they won't have to process all of those extra DNS requests, or the bandwidth that goes along with them. I've already seen an improvement in response times that makes this feel like a supercharged network. I am impressed.


May I be the devil's advocate?
I believe that CS shouldn't use adware blockers at all.

I've already tried this with my openwrt router (a couple of years ago), and no matter what list you find out there, we will see a lot of users saying that they can't connect to this or that site. In a nutshell, this kind of blocking should always be up to the user and not imposed by the CS VPN!

You could maybe have one dnscrypt instance with an adware blocker, but it should never be forced by the push request in the OpenVPN server-side conf file.

We can consider this a limitation of my "internet freedom" - "My data, my problem". I chose a VPN to be free of my ISP's restrictions and EU data retention, so please, CS staff, consider a new approach to this stuff.

In the long run this kind of stuff will bring more problems than solutions.

Yours truly,
Tealc

Do note that we're pulling from an external blacklist - not attempting to create such a thing from thin air. Which would be... eeek. Anyhow, I think the underlying repo is open for pull requests and stuff, so if there's something in there that really shouldn't be, it might be worth going upstream (as it were) and seeing if it's appropriate to rm from that resource itself.

(though yes we can pull stuff, or otherwise mod, downstream - though we've not official process for tracking such requests and edits thus yet... which sets the stage for much sad, down the line, if we don't get that process in place early - imho)

Cheers.

[quote="LoveTheStorm"]Ps. also http://www.datafilehost.com/ is blocked. Seems a bit much :shock:[/quote]

Do note that we're pulling from an [url=https://github.com/StevenBlack/hosts][b]external blacklist[/b][/url] - not attempting to create such a thing from thin air. Which would be... eeek. Anyhow, I think the underlying repo is open for pull requests and such, so if there's something in there that really shouldn't be, it might be worth going upstream (as it were) and seeing if it's appropriate to rm from that resource itself.

(though yes, we can pull stuff, or otherwise mod, downstream - but we have no official process for tracking such requests and edits yet... which sets the stage for much sadness down the line if we don't get that process in place early - imho)

Cheers.
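If you're curious what's actually in that upstream list, pulling the domains out of a hosts-format blacklist takes only a few lines of Python. A minimal sketch - the parsing rules here are the common hosts-file conventions, not anything deepDNS-specific:

```python
def parse_hosts_blocklist(text):
    """Extract blocked domains from a hosts-format blacklist
    (lines like '0.0.0.0 ads.example.com'; '#' starts a comment)."""
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        parts = line.split()
        # hosts format: <sinkhole-ip> <domain> [more domains...]
        if parts[0] in ("0.0.0.0", "127.0.0.1"):
            domains.update(p.lower() for p in parts[1:]
                           if p not in ("localhost", "localhost.localdomain"))
    return domains

sample = """\
# sample hosts-format blacklist
127.0.0.1 localhost
0.0.0.0 ads.example.com tracker.example.net
0.0.0.0 click.example.org  # inline comment
"""
print(sorted(parse_hosts_blocklist(sample)))
# -> ['ads.example.com', 'click.example.org', 'tracker.example.net']
```

Swap `sample` for the downloaded contents of the StevenBlack hosts file and you get the full domain set the resolver can filter on.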

[quote="LoveTheStorm"]Hi PJ, first well done. I am loving this. Crypto love! :D I am already using Crypto dnscrypt from start for all my connections, not only vpn.[/quote]

We need to actually announce the public deepDNS resolvers: they're really handy, and it'd be great for more folks to know they exist. It's been on our core team to-do list for, ummm... a couple of years? Ouch.

[quote]You know when here in the official list https://download.dnscrypt.org/dnscrypt-proxy/ (dnscrypt-resolvers.csv) will be ALL crypto dns, for example there is not Moldova, etc., so we can use them all always.[/quote]

Adding to the aforementioned to-do list. :-P

[quote]Also, you can whitelist adfly? I use often it and from today i started to switch to Holland dns dnscrypt i.e. when i need to use adfly and come back to storm dns then. With this whitelist i can avoid that switch and stay with storm dns eheh

I will write here if i see something other that can be whitelisted for better use.[/quote]

What would be (might be?) cool is to have some "unfiltered" deepDNS resolvers that don't have TrackerSmacker running on them - for testing, or for people who need full lookups, eh? It would make sense...

Anyhow, we've been tuning the whitelist as we go - some things were blocked inadvertently in the early rollout, and it's a process of learning how to keep the list from being overly aggressive. Fwiw, my own hope is to see our whitelist move over to the github repo so it's public and easy for folks to commit/pull against - which scales better than manual work. For now we're doing it more or less manually... if you're bored and want to set something up on github, let me know your handle there and I'll gladly auth you (and anyone else) into the repo with write privs so we can get that going.

[quote]Anyway really amazing work man, you all here are a great team. I love the Storm! :clap: :thumbup:[/quote]

Cheers, mate - it's an honour to be of service. Genuinely so.
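For illustration, whitelist handling of the sort described above boils down to a simple set operation. This is a hypothetical sketch, not the actual deepDNS tooling - it drops a whitelisted name (and its subdomains) from the blocked set:

```python
def effective_blocklist(blacklist, whitelist):
    """Apply a whitelist to a blocklist: a whitelisted domain, or any
    subdomain of one, is removed from the set of blocked names."""
    def whitelisted(domain):
        return any(domain == w or domain.endswith("." + w) for w in whitelist)
    return {d for d in blacklist if not whitelisted(d)}

# hypothetical example: un-block adf.ly as requested above
blocked = {"adf.ly", "cdn.adf.ly", "evil-tracker.example", "ads.example.com"}
allowed = {"adf.ly"}
print(sorted(effective_blocklist(blocked, allowed)))
# -> ['ads.example.com', 'evil-tracker.example']
```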

Hi PJ, first well done. I am loving this. Crypto love! :D I am already using Crypto dnscrypt from start for all my connections, not only vpn.

You know when here in the official list https://download.dnscrypt.org/dnscrypt-proxy/ (dnscrypt-resolvers.csv) will be ALL crypto dns, for example there is not Moldova, etc., so we can use them all always.

Also, you can whitelist adfly? I use often it and from today i started to switch to Holland dns dnscrypt i.e. when i need to use adfly and come back to storm dns then. With this whitelist i can avoid that switch and stay with storm dns eheh

I will write here if i see something other that can be whitelisted for better use.

Anyway really amazing work man, you all here are a great team. I love the Storm! :clap: :thumbup:

Ps. also http://www.datafilehost.com/ is blocked. Seems a bit much :shock:

[color=#800080][u]NEW THING![/u] - there's now a [url=https://cryptostorm.ch/viewtopic.php?f=46&t=8991][b]parallel, dedicated forum thread here[/b][/url] for the more philosophically-driven critiques of TrackerSmacker... take a look, if that's where you'd like to dip an oar (so to speak). Thanks![/color]

Since we moved from [url=https://cryptostorm.ch/viewtopic.php?f=46&t=2778][b]years of study[/b][/url] and [url=https://cryptostorm.ch/viewtopic.php?f=46&t=6618][b]admittedly obsessive analysis[/b][/url], and into providing our own [url=https://cryptostorm.is][b]cryptostorm[/b][/url]-maintained Domain Name Service (DNS) resolver architecture - which we named [url=http://deepdns.net][b]deepDNS[/b][/url] (because we got tired of referring to it in our team discussions & [url=https://cryptostorm.is/chat][b]IRC chat[/b][/url] as "the in-house cstorm DNS resolvers") - we've been breaking new ground in exploring all the ways that doing really good DNS resolution service can improve network security for our members and for the wider community online.

That's really no surprise... but it's still a little bit surprising just how powerful deepDNS really has the potential to be. After all, DNS is one of the fundamental building blocks of internet functionality: there's no internet without DNS. Plus, DNS itself is notoriously riddled with all sorts of security issues, known vulnerabilities, and all but uncountable ways that it can be attacked successfully and with devastating effect (see also: [url=https://twitter.com/dakami][b]Dan Kaminsky[/b][/url] :-P ). So, doing DNS better - not perfectly, mind you... but still better than it is normally done - for our members and the community can really be a Good Thing. It is, actually, a Good Thing; we've already seen that, in how we enable things like [url=https://cryptostorm.ch/viewtopic.php?f=62&t=7672][b]transparent .onion/i2p access[/b][/url], and how we implement [url=https://cryptostorm.ch/viewtopic.php?f=46&t=8539][b]DNScurve and DNSchain protections[/b][/url]. Which is all... really good. :-)

But wait, there's more. Turns out, there's the possibility to do [i]a lot[/i] more.

In recent discussions amongst our core team, the idea of doing DNS-based ad-blocking came up. This isn't a new idea, to be clear: it's been done, and discussed, and explored by other smart folks, and it's not something we came up with out of thin air (nothing really is, because any really good idea has already been noted by researchers long before it's ready to implement by a team such as ours - by definition). Once we started kicking the idea around, however, we immediately saw how powerful it could be in the context of our network itself.

This isn't the right place to do a full analysis of the various ways adware/crapware and ad-tracking spyware breaks the internet, hammers privacy, enables spy agency surveillance, and also makes all sorts of routine daily 'net activities slow, dysfunctional, and generally awful. We all know this is true; what used to be a sort of marginal concern (mostly related to security/privacy damage) has become really mainstream in terms of why ad-tracking crapware is evil. If for no other reason, all this tracking stuff that adware uses to throw more and more targeted ads at us makes a big chunk of websites on the internet so bloody slow as to be all but useless. Plus, it straight-up causes browsers to crash when some ad-heavy websites are visited... and that's the [i]legitimate[/i] news websites! (try visiting some [url=http://katstorm.faith][b]tracker sites[/b][/url], or [url=https://pornbay.org][b]"adult" sites[/b][/url], and they simply refuse to load no matter how fast a 'net connection one might have). Then there's the huge impact this garbage has on smartphone-based web browsing... basically, the list of bad things coming from ad-tracking crapware is really long, really deep, and impossible to ignore nowadays.

So, unsurprisingly, there's all sorts of counter-tech that exists out there. Most has the best of intentions... but even so, a lot of it has become dysfunctional itself. For example, some long-popular "adblock" browser extensions are nowadays so bloated, inefficient, and complex that they themselves slow browser performance to a crawl... and some even allow ad networks to pay for whitelist status, as a revenue source! We're not passing judgement here, to be clear. What we are saying is that almost every approach to limiting this ad-tracking crapware has its own laundry list of unintended symptoms, costs, and frustrations associated with it.

Not using any of it, however, results in web browsing that's often slow, buggy, rendered poorly, littered with pop-ups, bogged down with crash-y javascript... and of course so not-private it's almost impossible to overstate. So it's a lose/lose sort of decision we all have to make, in terms of what anti-adware tools we use (along with their side effects) versus how much ad-tracking crapware we're willing to put up with (in terms of all the evils it brings).

Blah. That sucks. So we made TrackerSmacker(h/t [url=https://twitter.com/FalsNameMcAlias][b]@FalsNameMcAlias[/b][/url]). :mrgreen:

[attachment=4]IMG_20160311_131053.jpg[/attachment]

Technically, what we're doing with TrackerSmacker is elegantly simple: we take a [url=https://github.com/StevenBlack/hosts][b]nicely-maintained (and opensource) list of known-crapware ad-tracking domain names and URLs[/b][/url], and we block DNS queries made via deepDNS that relate to those ad-tracker nasties. Because everyone on cryptostorm's network is, by definition, using deepDNS resolvers (which are "pushed" during cstorm connection in the [url=https://cryptostorm.ch/viewtopic.php?f=47&t=8544][b]current "Narwhal" widget[/b][/url] - and which will be pushed even pre-connection in the new "Black Dolphin" widget 3.0), that means that every web browsing session whilst on-cstorm is filtered of all this ad-tracking crapware. Members need not install anything, do anything, change anything, or in any way fiddle with stuff in order to get this benefit. It... just works - the best kind of tech there is, tbh!

[attachment=6]Screenshot (2).png[/attachment]

Better yet, and unlike adblock-style browser extensions, TrackerSmacker prevents the ad-tracking crapware from even being downloaded or pushed in any way to the browser in the first place. That's different from ad-blockers that live in the browser, which have the hard job of looking at stuff [i]after it's already been pulled from a webserver[/i] and deciding whether to render it in the browser. TrackerSmacker blocks the DNS resolution of the crapware itself - it never gets to the browser, never gets parsed by an extension or the browser's own render (or .js) engine, and never even comes across cryptostorm's network. Like we said, it's elegant... damned elegant. And it works really, really well.
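The mechanics can be sketched in a few lines - this is a toy illustration of DNS sinkholing in general, not our actual resolver code. A filtering resolver answers blocked names with a non-routable address, so the browser never fetches the ad/tracker content at all:

```python
BLOCKED = {"tracker.example.net", "ads.example.com"}  # hypothetical blocklist
SINKHOLE = "0.0.0.0"  # non-routable answer; the browser's request dies here

def resolve(name, upstream):
    """Toy filtering resolver: answer blocked names (and their
    subdomains) with a sinkhole address, pass everything else
    through to the real upstream resolver."""
    name = name.lower().rstrip(".")
    if any(name == b or name.endswith("." + b) for b in BLOCKED):
        return SINKHOLE
    return upstream(name)

# stand-in for a real upstream lookup
fake_upstream = lambda n: "93.184.216.34"
print(resolve("ads.example.com", fake_upstream))   # -> 0.0.0.0
print(resolve("example.com", fake_upstream))       # -> 93.184.216.34
```

The browser asks for the blocked name, gets an address that goes nowhere, and the request for the crapware simply never leaves the machine.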

Earlier versions of DNS-based ad-tracker blocking required folks to manually set their local DNS resolvers to a new resolver that did the blocking for them. That's fine, sorta... but it's beyond what most folks want to have to do in order to block ads - also, it doesn't always stay working, and needs to be done repeatedly in a lot of OS contexts in order to "stick" over time. Since we do this at the deepDNS-resolver level of cryptostorm's network, all that fiddling is simply not needed. Indeed, we implemented TrackerSmacker behind the scenes, last week, without any need to tell folks about how it works in order for it to work.

That's right: since last week, if you're using cstorm, your hand has [i][url=http://plowingthroughlife.blogspot.com/2014/03/youre-soaking-in-it.html][b]already been soaking[/b][/url][/i] in the luxuriously adware-filtered softness of TrackerSmacker! ;-)

[attachment=7]537050_551871288186247_429383099_n.jpg[/attachment]

True to form, we've created a new [url=https://github.com/deepDNS/TrackerSmacker][b]github repository[/b][/url] for the deepdns-TrackerSmacker function - and we'll be publishing there the syntax we use to enable it, the whitelist/blacklist exceptions or additions that we make based on community input, and so on. Which is to say: the details of how TrackerSmacker works, and how we've implemented it, are far from secret or nonpublic. We're looking forward to ongoing community assistance in fine-tuning the way we provide TrackerSmacker protection within the deepDNS context.

And guess what? Because we maintain a (not officially announced, but long-since-supported fully) public pool of deepDNS-powered resolvers, anyone who wants to can benefit from deepDNS... even if (for some mysterious reason) they aren't using the cryptostorm network itself. At no cost: free. That requires manually changing local DNS settings, of course... but even so, it's pretty useful, and pretty cool, that anyone can take advantage of TrackerSmacker.
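If you've pointed your local DNS at a filtering resolver and want to check whether a given name is being filtered, one rough heuristic is to look at the answer addresses. A hypothetical helper - the sinkhole addresses listed are common conventions among DNS blockers, not a documented deepDNS behavior:

```python
import ipaddress

# addresses commonly used as DNS sinkhole answers (assumption, see above)
SINKHOLES = {"0.0.0.0", "127.0.0.1", "::", "::1"}

def looks_blocked(answer_ips):
    """Heuristic: an empty answer, or an answer consisting only of
    sinkhole/unspecified addresses, suggests the resolver filtered
    that name."""
    if not answer_ips:
        return True  # NXDOMAIN-style blocking
    return all(str(ipaddress.ip_address(ip)) in SINKHOLES
               for ip in answer_ips)

print(looks_blocked(["0.0.0.0"]))        # -> True
print(looks_blocked(["93.184.216.34"]))  # -> False
```

Feed it the A/AAAA answers from your lookup tool of choice and compare against an unfiltered resolver to see the difference.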

[attachment=5]Screenshot (6).png[/attachment]

This post is already longer than it should be, which happens - and we've not yet included some technical details that certainly will be important as TrackerSmacker continues to evolve and expand its ability to block garbage from network sessions. Rather than bogging it down further, we're going to wrap up this introductory post and open the thread for questions, suggestions, discussions, and so forth. Ah yah: we're even talking about doing a "real" press release - wow! - so if you or someone you know is press-release-savvy and you'd like to help with that, drop a note in here and we'll be really happy to take up the offer of assistance.

TrackerSmacker is cool, it really is. It makes websites with lots of ads on them load way, [i]way[/i] faster - and not be crashy, bloated, and laggy when scrolling. With fine-tuning, it'll continue to improve and to add more benefits for anyone who wants to make use of the deepDNS resolvers. We're not anti-advertising at a philosophical level, nor even particularly obsessive about the privacy impact of ad-tracking crap (which is pretty seriously negative, even in the best of interpretations)... but we have seen this stuff turn into a serious pothole on the internet. And we just filled that pothole, with TrackerSmacker - or whatever metaphor works better than that. Whatever - it's cool. :thumbup:

DeepDNS started as something we created because the alternative tools out there weren't quite up to cryptostorm's standards of functionality, privacy, and security. Since that start a few years back, it's expanded into its own thing - in some sense, with a broader reach than cryptostorm itself. Who knows... perhaps deepDNS will fly the nest and become a big, cool, standalone success story that overshadows cryptostorm itself. Stranger things have happened, eh?

Meanwhile, we're proud to be where deepDNS started - and where TrackerSmacker got going, too! W00d.

[i]<insert not actually gratuitous h/t to our friend [url=https://twitter.com/eyebrain][b]ntldr[/b][/url] for his help brainstorming the early structure of TrackerSmacker... but this pic is in fact totally gratuitous, so there's that :angel: >[/i][attachment=0]ojeexs.png[/attachment]