Cloud hosting, IIS, PS, Cloudbleed, CDN, Hashing, 2G networks

Post date: Nov 12, 2017 3:28:49 AM

Don't put all your eggs in one basket. Many of the cloud providers suggest it's best if you only use their proprietary platforms. I find that horrible. If something happens, you've got a code base which you can't run anywhere else, and you're seriously vendor locked. What if that happens suddenly and by accident, and you have to move NOW to another platform, because the platform you used to use is now dead? That's why I like a bit more versatile setups, and having preexisting agreements with several cloud providers. If provider A has issues, we can relocate everything quickly to provider B or C. We're familiar with how providers B and C work, we've got agreements ready, and everything can be done easily in less than 24 hours. It would be interesting to see how long a shop with serious Amazon lock-in would take to transfer everything from AWS to Google Cloud Platform or Azure. It should be just switching service providers, not more complex than that.

Enjoyed some IIS fun: no support for SNI. Awesome, so much for using Let's Encrypt properly with several sites on one IP. Sigh. Well, maybe I should just update that system to Windows Server 2016. Otherwise Let's Encrypt worked really nicely when configured using the default paths and no access restrictions. Of course sites which require basic auth didn't work out of the box; it was necessary to configure anonymous access for the .well-known path.
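For reference, the fix can be sketched as an IIS configuration fragment like the one below. The site name is hypothetical, and note that by default the authentication sections are locked at the site level, so they usually have to be unlocked in applicationHost.config first; this is a sketch, not a drop-in config.

```xml
<!-- Hypothetical fragment: allow anonymous access to ACME challenge
     requests under a site that otherwise requires Basic authentication. -->
<location path="Default Web Site/.well-known">
  <system.webServer>
    <security>
      <authentication>
        <anonymousAuthentication enabled="true" />
        <basicAuthentication enabled="false" />
      </authentication>
    </security>
  </system.webServer>
</location>
```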

Had more fun with PowerShell and PowerShell ISE, plus the different execution policies and so on. But after a bit of tuning and playing I got what I wanted running as a scheduled task. I'm not fluent in PowerShell, but with dedication and sweat I'll get done what's required. Actually, I'm pretty sure that PowerShell could cover most of the cases I'm currently using Python for.
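For anyone hitting the same execution policy wall with scheduled tasks, the usual trick is to side-step the policy for that one invocation in the task's action. A sketch (the script path is hypothetical):

```
powershell.exe -NoProfile -NonInteractive -ExecutionPolicy Bypass -File "C:\Scripts\nightly-job.ps1"
```

-ExecutionPolicy Bypass only applies to that single process, so the machine-wide policy stays untouched, and -NoProfile keeps the task from depending on whatever is in the user's profile script.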

CloudBleed, the Cloudflare memory leak. Lots of upset people. I don't know. It sounds like such a common programming failure that there's nothing special about it. The only problem is that some people clearly chose to transmit confidential data through a large shared cache, with bugs. Well, what did you expect? We can check history, and this is nothing 'new' or 'special'. The same applies to the SHA-1 case. I might sound pretty cynical, but that's just the truth. Things work when they work, and sometimes, even often, they won't. That's how it is. Of course fixing the issue in this case won't erase the leaked data from search engines and any other services which store data from web sites, like different kinds of archives, data in users' browser caches, etc. So definitely not a nice thing, but nothing special in a generic security context. - I noticed someone else has been having similar thoughts, and those generally weren't really liked. Well, the truth isn't always nice. - Had a chat with one friend about this. He immediately said that Cloudflare should only be used for static assets; all PII is passed directly to the backend, bypassing Cloudflare's caches. That is naturally exactly what everyone should have been doing. There's no point in using a CDN for user-specific, non-cacheable data.

What are the real pros and cons of using something like Cloudflare? Early TLS termination, nice. Shorter round trips with end users (helps with packet loss recovery). Quicker TCP window growth, etc. Should APIs pass through Cloudflare, or should clients access the API servers directly? Is there any benefit from caching? A Content Delivery Network (CDN) can revalidate an object every now and then and let the client know that it's still valid, using headers like Last-Modified, Expires and ETag.
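That revalidation logic is simple enough to sketch. Here's a rough Python model (function and parameter names are mine, not any real CDN's API) of how a cache decides between a 304 Not Modified and a full 200 response; per RFC 7232, an ETag match takes precedence over the Last-Modified date:

```python
from email.utils import parsedate_to_datetime

def revalidate(stored_etag, stored_last_modified,
               if_none_match=None, if_modified_since=None):
    """Decide the status code for a conditional GET against a cached object.

    stored_etag:          ETag the cache holds for the object, e.g. '"v2"'
    stored_last_modified: HTTP-date string of the object's Last-Modified
    """
    # If-None-Match takes precedence over If-Modified-Since (RFC 7232).
    if if_none_match is not None:
        return 304 if if_none_match == stored_etag else 200
    if if_modified_since is not None:
        # Unchanged if the object was not modified after the client's date.
        if parsedate_to_datetime(stored_last_modified) <= \
                parsedate_to_datetime(if_modified_since):
            return 304
        return 200
    return 200  # unconditional request: send the full response
```

A 304 carries no body, which is exactly where the CDN saves bandwidth: the client keeps serving its cached copy and only the small validation exchange crosses the wire.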

Time passes, and 2G networks are being shut down around the world. Some are already gone, and many more are going down in 2017 and the years soon after that. This opens great chances for Sigfox and other technologies like LoRaWAN. Even old 2G-based systems need to be upgraded with new 'modems' for signaling.
