JustAnotherOldGuy writes: Marco Marsala appears to have deleted his entire company with one mistaken piece of code. By accidentally telling his computer to delete everything on his servers, he has seemingly removed all trace of his company and of the websites he looks after for his customers. Marsala wrote on a CentOS help forum, "I run a small hosting provider with more or less 1535 customers and I use Ansible to automate some operations to be run on all servers. Last night I accidentally ran, on all servers, a Bash script with a rm -rf {foo}/{bar} with those variables undefined due to a bug in the code above this line. All servers got deleted and the offsite backups too because the remote storage was mounted just before by the same script (that is a backup maintenance script)." The terse "rm -rf" is so famously destructive that it has become a joke in some computing circles, but not to this guy. Can this incident finally serve as the textbook example of why your offsite backups need to be physically removed from the systems they're archiving?
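For the curious, the failure mode is easy to reproduce and just as easy to guard against. Here is a minimal sketch, assuming the script looked roughly like the quoted fragment — foo and bar are the placeholder names from Marsala's own post, and echo is added so the sketch is safe to run:

#!/usr/bin/env bash
# Hypothetical sketch of the failure mode, not the actual script.
# With foo and bar unset (the "bug in the code above this line"),
# the path collapses to "/":
echo rm -rf "$foo/$bar"       # prints: rm -rf /   (echo added for safety)

# Two cheap guards that would have turned the same bug into a loud abort:
set -u                                     # referencing an unset variable is fatal
echo rm -rf "${foo:?unset}/${bar:?unset}"  # ${var:?msg} exits with a message if unset/empty

Note that quoting doesn't save you here, and GNU rm's default --preserve-root only refuses to recurse on "/" itself; a half-expanded path (one variable set, the other not) gets no such mercy, and neither does a backup volume the same script has just mounted.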

This is about how we treat the data of a citizen of one large jurisdiction when it moves to, or is stored in, another large jurisdiction, and about removing legal uncertainty for the companies doing so. For example, this very site stores the account info of EU residents (handle, email and encrypted password) in the US. Nothing overly private, but it still falls under the privacy laws of every jurisdiction those users live in, any of which could object and produce a warrant or subpoena. Without overarching legal frameworks governing and taming that legal diversity and uncertainty, it is basically impossible to run a large website. Plain and simple. If you're an engineer, you absolutely want to be insulated and protected from all this potential BS, regardless of how much of a non-issue your own data collection might be to your engineering mind.

grrlscientist writes: Free-living Mexican red-crowned parrots have been adapting so well to urban life in California and Texas that their population numbers may rival those in their native Mexico, says a team of US researchers. This has important implications for conservation.

v3rgEz writes: Jeff Bezos is bullish on the cloud, pegging AWS' sales for this year at $10 billion in a recent letter to shareholders. But he said there was a surprising source of that success: the company's willingness to fail. That said, with AWS now spanning 70 different services, Amazon can afford to have some fail, as long as a few, like EC2 and S3, keep winning.

Even if ISPs are relatively transparent about what they sell you, it is always about maximum download and upload speed, and never about latency and quality of service. In fact, sales and first-tier support folks don't even know these terms, much less their company's typical values. In practice, a stable, low-latency broadband connection capped at 15 Mbit/s gives you a better overall experience than a jerky, high-latency connection that on paper tops out at 50 Mbit/s: loading a typical web page means dozens of small requests, and each one pays the round-trip cost before raw bandwidth matters at all.

I am very glad the FCC is including these numbers by default to judge a provider's disclosure practices. As an aside, test your connection at https://www.voipreview.org/spe... and see your latency, jitter and packet loss alongside the other metrics.
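If you'd rather measure from a shell than from a web page, plain old ping already reports all three numbers. A rough sketch, with an arbitrary target host and illustrative output:

# Quick shell-level check of latency, jitter and loss. The target is
# just an example; pick something topologically close to you.
ping -c 20 8.8.8.8 | tail -n 2

# Typical output on Linux (iputils); format varies by platform:
#   20 packets transmitted, 20 received, 0% packet loss, time 19028ms
#   rtt min/avg/max/mdev = 11.2/12.4/18.9/1.6 ms
# "avg" is your latency, "mdev" is a rough stand-in for jitter, and the
# loss percentage speaks for itself.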

If you've configured your site to allow arbitrary content from unknown third parties, your site is compromised by design. If the mere act of rendering the content your site serves is sufficient to get malware, then yes, your page is compromised. It doesn't matter that the malware originated in somebody else's ad service: if that service feeds data directly into your site, and you present it to your visitors without any sort of vetting or filtering, then you've allowed that malware to compromise your site.
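One concrete form that vetting can take is a Content-Security-Policy header, which tells browsers which origins are allowed to execute script on your pages. A minimal sketch — example.com and ads.example are placeholders, and the policy shown is illustrative, not a recommendation:

# See what policy, if any, a site currently sends:
curl -sI https://example.com | grep -i '^content-security-policy'

# A header like the following, added in your web server config, limits
# script execution to your own origin plus one whitelisted ad host:
#   Content-Security-Policy: script-src 'self' https://ads.example

Of course, CSP only constrains where code may load from; it says nothing about what a whitelisted ad server chooses to send, which is exactly the next commenter's point.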

You do realize that a site only embeds the ad network's code, not the final downloaded content? That is, yes, a site takes on some responsibility when it decides to run ads from an ad network. Beyond that, however, every user potentially gets different ads: real-time bidding platforms and user-profiling code sit in the middle, completely outside the website's direct control.

The ads appeared when I visited those websites; therefore, it appears the websites are responsible for spreading the malware.

If it were that easy, this wouldn't be a problem. You've got at least three players here: the website running ads and trying to fight off the bad stuff, the ad networks, which only sometimes care enough, and the advertisers trying to game the system into running bad ads. It's a continuous arms race, and as a website owner you end up in reactive mode rather than proactive.

Here's an idea: how about someone writes an ad blocker that DOWNLOADS the ads, just like normal, but simply does not RENDER them on the screen or execute any code? Seems like the best of both worlds: users who don't want to see the ads don't see them, and websites still get paid, since there's no way to tell whether the ads actually got shown.

Won't work anymore. Big advertisers want proof that their ad was seen, via DoubleVerify or similar, and they only pay for ads that were in front of users for a certain amount of time. JavaScript and CSS make that easy to measure, and hard to work around.

Well, some of us on the new team have been around for a while. I remember a magenta-and-white version of /. run out of Rob Malda's dorm room. It's like meeting up with a long-lost friend - happy to be involved behind the scenes now.

HTTPS also blinds "proxies" and antivirus software that may have their own opinions about what should and should not travel over plain old port 80. ISPs have pulled stunts like ad injection, antivirus software routinely blocks WebSockets, and on and on. HTTPS is a godsend for routing around this bullshit.