Posts by zanshin

Page:

Re: Tricky one - does one attribute affect the other.

With the massive caveat that I'm not a student of US jurisprudence, I'm not aware of anything about this situation having a risk of invalidating this judge's previous rulings, even if the charges of inappropriate behavior are found to be true. It might be different if a judge were found guilty of misdeeds where their past rulings involved clear conflicts of interest with their actions. This doesn't seem to be such a case.

Should we lose a clear-minded and forward-thinking judge, though, that's clearly a regrettable state of affairs, even *if* it does turn out to be justified. And I do think that's what our intrepid reporter was pointing out.

Re: Python whitespace

I've been a Python coder for 10 years, and I've never had a significant problem with the tabs vs spaces thing. Like, it maybe pops up once every ... four, five months? And either an editor feature or a search/replace fixes it lickety-split.

I've also never had a problem getting (what I consider) proper code editors to stick to one standard for Python code. Probably the most common way for me to get in trouble with that is if I hack something with vi on a local machine, but I try to only do that for dire reasons. Most of my code goes from editor to source control to *some* kind of deployment pipeline.

There's tremendous value (for any language) in having something that can really lint your code, preferably inline in your editor. (It looks like lack of that is one of the complaints against Perl?) Any popular Python linter will tell you you've got mixed tabs and spaces. And linters for many languages I've worked with will complain about that even if it's not syntactically relevant.
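For what it's worth, you don't even need a linter for the worst case: stock Python 3 rejects ambiguously mixed tabs and spaces at compile time. A quick sketch (the helper function name is mine, just for illustration):

```python
# Python 3 raises TabError when indentation is ambiguous, i.e. when
# lines compare as equal under one tab size but unequal under another.
def has_mixed_indentation(src):
    """Return True if the source mixes tabs and spaces ambiguously."""
    try:
        compile(src, "<snippet>", "exec")
        return False
    except TabError:
        return True

bad = "if True:\n\tx = 1\n        y = 2\n"   # tab indent, then 8 spaces
ok = "if True:\n    x = 1\n    y = 2\n"      # consistent 4-space indent

print(has_mixed_indentation(bad))   # → True
print(has_mixed_indentation(ok))    # → False
```

So even without tooling, the interpreter itself refuses the genuinely dangerous cases; linters just catch it earlier and more politely.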

I'm a bit surprised to see Perl rate highest for hate. I do suppose that's got a lot to do with where one is coming from. I wouldn't try to write anything really complex in Perl - I try to use it only as an awk stand-in, though I did have to write CGI code in it back in the day. I don't hate it. I just think there are better languages for most formal programming tasks. And I am keenly aware it's super hard for most people to read and understand a Perl program of any real complexity.

My personal most disliked language would probably be Pascal. I was taught that as a programming language in college, which I thought was laughable at the time. Maybe this was an implementation thing, but the version we had to work with was extremely limited. The closest thing I could get to an array with it was a doubly linked list. Fortunately, I knew how to program (if not how to exercise best practice) before I went to college.

VB is up there on my list of dislikes, though it did serve a useful purpose in its day. I just really disliked its syntax.

Much of the argument as presented by those lobbying is, as is so often the case, framed as a false dichotomy.

What needs to be regulated here is not so much the speed at which we get to enjoy our streaming or downloads, but the opportunity for monopoly behavior by mammoth incumbents who use their size to create vertical or horizontal integration that restricts competition.

In my view, the only valid argument for anything like what people are calling "net neutrality" maps well to the case the EU is making against Google: that it abused its dominance in search to drive customers to its own non-search products at the expense of competing services from other providers.

Translating that to the basic form of what "net neutrality" *ought* to be about, it means that companies like, say, Hulu (to pick a non-FANG company) should feel secure that their service will not be given inadequate treatment by an infrastructure carrier, like Comcast, who also streams their own, in-house content. It's not that there's no conceivable justification for Comcast charging Hulu, but rather that any charges need to be fair and reasonable, and should not be designed to cause structural disadvantage to Hulu relative to Comcast's own streaming services. This is particularly important when the carriers represent an oligopoly or near-monopoly, as they do in the US, since a Hulu has no real choice but to deal with them if they want access to huge parts of the US market.

The idea that protection against anti-competitive behavior by carriers requires "treating all traffic the same" is an abuse of the reality that most people have no idea how internet traffic works, or that different types of internet traffic have different quality-of-service requirements, sometimes by design.

There are other distortions at play as well. Conservatives often hate government regulation, and I think there's merit in the idea that Title II classification for internet carriers is, at best, a square peg in a round hole. But there's a difference between not wanting the wrong regulation (or too much "correct" regulation) and wanting no regulation at all. Truly "free markets" are a pipe dream - you *always* need some degree of backstop against the human tendency towards greed. Time and time again we've seen that when left to their own devices, people will relentlessly converge on maximizing short-term personal/corporate profit at the expense of the broader community. (The phrase "this is why we can't have nice things" comes to mind.) So the question should not be whether we need protections, but what form they should take, and who should be responsible for ensuring companies adhere to them in this case (FTC? FCC?). It would be nice if whoever it was wasn't stacked with friends of the industries they supposedly regulate, but that seems to be a perennial issue.

It doesn't help that, of late, so many of our political parties are highly polarized, and seem more interested in tearing each other down than in actually getting anything done. Compound this with the sides in the "net neutrality" fight in the US seemingly aligning along those partisan divisions in their lobbying efforts and you have a recipe for misinformation, dramatization and, ultimately, failure to really get anything useful done.

Count me among those wishing...

...that "whoever wrote the docs" was an option in the survey.

Really, though, I don't think anyone *needs* to be fired over the actual failure. It's a small company, and they just experienced a comedy of collective errors across various people, probably born of technical debt accrued as they got up and running. It never has to happen, but it's pretty common. It's bad, but you learn, adjust and move on. Well, you move on unless the lack of backups in this case means they are so thoroughly screwed that they have to shutter the business.

All that said, I do think the CTO needs to go, not necessarily because this happened, but for how he's handled this. Politically, he might have needed to go anyway by virtue of falling on his sword for owning the collective failures of the team(s) under his control, but the obvious scapegoating of the new hire for what's a really broad failure of IT processes says to me that he's not going to learn from this and get his ship right. One way or another he needs to own this, and firing the new guy who copy/pasted commands right from an internal document doesn't convey that kind of ownership to me.

Re: Isn't puppet , chef , and Jenkins .. CI/CD .... devops

"Suppose to cure this type of HUMAN fkck ups ?"

In a word, no.

Those tools and the processes they support are for automated testing of changes you plan to roll out, and automated deployment of those changes, hopefully after someone or something has approved them. They make replication of change across many environments simple, including setup of servers, environments and so forth.

The people in question were carrying out triage on a production performance issue. "Infrastructure as code" isn't really that helpful during triage. You usually have to dive in and run commands by hand. In such a situation, if what you are trying to resolve is related to production load and scale, you probably cannot replicate it on-demand in a test environment, even if you'd like to. That, in turn, can mean you can't really usefully test the command you plan to run.

Given the nature of AWS/S3, I'm quite sure the command line entered did something heavily automated at scale, and might well have been executed with their equivalent of something like Chef, but *what* it was told to do was likely derived from the triage efforts. You can bork your production environment just fabulously with the wrong command inputs to a tool like Chef. It will dutifully obey you if the command you give it is legit. (They mention that they will change their definition of what's legit based on this experience.)

I certainly do run what I perceive as "dangerous" commands in test environments before I run them in production, just to make sure I got them right. I can then copy-paste them exactly from dev into prod, at least where the command will be identical in either environment. But if I don't think the command is dangerous, possibly just because I've become used to running it without failure, I could conceivably type it out in full confidence and still screw it up. Triple-checking yourself before you hit "enter" is a matter of experience and, too often, not being over-tired or in a rush.

Re: My $0.02 (@JLV)

JLV, I agree that ID theft is different. My reply earlier was specific to the crime in question, and how it specifically was contrasted to robbery. That's even evident in the summary bit you quoted.

ID theft definitely can cause significant disruption to one's personal life, or even the life of a whole family. I'm directly aware, as I've been a victim of it, and my case wasn't even that severe. (It was achieved before Internet use was commonplace, via theft of items from my postal mailbox.) Doing that on a mass scale is extremely disruptive and can have a high overall societal cost.

As far as I can tell, this guy was sending mass mails on behalf of other companies, which, with caveats, does not have such a severe cost to society.

Now, if his mails directly facilitated downstream crimes, that's a compounding factor. For example, if he was spamming out phishing mails, that's a different deal, as he's then facilitating another crime with greater societal cost. He's helping make it possible, even if it doesn't materialize. That's why I mentioned before that I felt that there might be warrant for greater punishment if he'd facilitated the illegal sale of narcotics.

Yes, his compromised botnet needs to be cleaned up, and there's a real cost associated with that. That very much matters. Does it matter enough to justify his jail term being longer than that for a 7/11 robber? I don't know. It depends on how much it really costs to deal with.

Basically, we shouldn't just look at how much money he made in order to determine the severity of his crime. The same is true of the 7/11 robber. We should look at the cost to others of them getting that money, which is not likely to be the same as the amount of money itself.

Re: My $0.02

On the other hand, when you "steel" money by robbing a 7/11, usually that involves walking in and threatening an attendant with bodily harm, sometimes with a weapon. You're also taking someone else's property (the cash in the register) directly, instead of being paid for work you actually performed, though in this case that work was performed in illegal and unethical ways.

I don't approve at all of what this guy did (and especially not of how he did it), and it's unfortunate that he pocketed a lot of money doing it, but in terms of effort to dissuade criminals I'm on board with in-person, physical robbery being penalized more harshly than using compromised servers as botnets for spam, even if there is a huge disparity in the gains.

If there's any aspect of this that I think might warrant a harsher penalty, it's the alleged material support of illegal narcotics sales. It's not clear that the authorities could pin that on him, though, since that's not something he ended up being found guilty of.

Re: You know what would be good?

The fact that I get Skype notifications on multiple devices is one of the primary reasons I've used it for so long. That aspect is still working fine for me.

Call quality is still mostly fine for me, but lately my friends on the newer PC client have problems where they show as offline even while actively using their devices - as in, they find their client has actually switched them to invisible. It's annoying.

I'm also not a fan of the changes they've made to the UI over the last couple of years. I preferred the simple, time-stamped list of messages. I don't need the colorful chat bubbles. They take up more room, hide time stamps by default, and lump together successive messages that might have been sent hours apart.

As a fairly heavy-duty Python coder

I don't understand the resistance to Python 3.

I *do* understand the issues it causes with legacy codebases, and how the pain involved in porting an existing codebase to the new version (where even possible) may not be worth it. This, in turn, may mean needing to write new code in older versions of Python so it works with these legacy codebases. I have to do this myself, and find it manageable. (I have also successfully ported existing code libraries, though they were all pure Python with no C extensions.)

Aside from maintaining legacy codebases, though, I don't really understand why anyone would prefer the 2.x branch. As someone who started with Python back in 2.2, I personally find the 3.x dialect's modest syntax changes and its library reorgs sensible, useful, and less obtuse in a number of ways.

Tying in to the discussion of Unicode text, I find the Py3K change to treating all strings as Unicode far more practical and safe in any context where you have to deal with non-ASCII data. Which these days means just about anything that touches text.
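To sketch why that separation helps (plain Python 3, nothing exotic):

```python
# In Python 3, str is always Unicode and bytes are a separate type,
# so the encode/decode boundary is explicit instead of implicit.
text = "Zoë"                          # str: a sequence of code points
data = text.encode("utf-8")           # bytes: explicit encoding step

print(len(text))                      # → 3 (code points)
print(len(data))                      # → 4 ('ë' takes two UTF-8 bytes)
print(data.decode("utf-8") == text)   # → True
```

In the 2.x branch, that same literal was a byte string whose length and meaning depended on the source file's encoding, which was a classic source of subtle bugs the moment non-ASCII data showed up.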

And finally, though I love working with Python, no, I'm not the biggest fan of the syntactic relevance of white space. I deal with it without serious issue, but I wouldn't mind at all if I didn't have to deal with it. That said, I really think anyone who can't deal with it doesn't really want to try. If interns opening and closing a Python source file in a text editor is actually a problem, that's pointing out a pretty serious deficiency in source code management. I'll agree it'd be much nicer to not need to worry about it, but come on, really.

A Great Read

I came to things El Reg too late to know all that Mr. Haines had worked on here. My thanks for sharing that and so much more about him, especially with so many contributing views. Learning all that he'd done and been was quite inspiring.

Ah, you're quite correct, Ruli. At least Nougat versions of Allo do have voice recognition for speaking to the Assistant built in, but on review, it's not clear that this is what the Google blog article that I linked (and that seems like the root source of this El Reg article) is talking about.

@JakeMS, I looked at KeePassX, but since it seems to have no plug-in support, there wasn't an obvious advantage to using it. (Perhaps not installing mono would count as an advantage for some, but it was already installed on my machine.)

"I recently decided to use a password manager and chose KeePass, primarily because it's open-source but also because it doesn't do any cloud stuff on your behalf. Unfortunately I've found that I end up using a cloud service to sync the keys database across all my devices - so I don't know how much better off I am now..."

You should be better off in the sense that you don't need to sign into a remote system over the wire in order to unlock the password database. In theory, someone needs to compromise you and/or your local system in order to obtain your master password and/or key file(s).

KeePass was recently in the news here for being one of the pieces of software the EU set aside money with which to fund a code audit.

KeePass works on Linux. To my great surprise, you can actually directly pass the KeePass Windows binary's path to the "mono" command and it will work, as long as you have the right runtime libraries installed.

Unfortunately, most of the KP file synch plugins don't seem to share that cross-platform mojo, in my experience.

With regards to KeeFarce, if something has the permissions needed to manage DLL injection, you are almost certainly well and truly hosed no matter how securely the KeePass application was written. Someone is running processes on your local machine as you or as an admin. Someone gaining this level of access for the express purpose of obtaining your passwords will likely be able to pop *any* password management scheme where you actually access the password repository on that machine.

Re: Well, here's your problem

The "partial autonomy" question is indeed interesting. This is actually recognized as an issue with commercial jet pilots, based on the outcomes of investigations into (rare) crashes. One of the conclusions is that the pilots have become so accustomed to not having to manage airplanes in flight that some crashes were, in essence, *caused* by the pilots doing the wrong thing when the autonomous system needed them to take over.

And those are people who actually have quite a lot of training, in stark contrast to the typical automobile driver. Granted, most of us on the road today probably have a lot of man-hours spent handling a car, so one would hope we wouldn't be hopeless at responding if our car suddenly screamed at us to do something. But we're talking about a future scenario where most of the man-hours spent in the driver's seat of a car will be spent *not* actually driving it, by design. Given how pants we are at that sort of thing today (collectively), with the long experience driving many of us have, needing to react in a crisis when you're not actually used to doing anything with the car doesn't inspire great confidence in me.

If the semi-autonomous thing is meant to just be a stop-gap until the systems are so good they really don't need us to ever step in, well, let's hope these things actually improve at self-driving faster than our driving skills atrophy due to over-reliance borne of them handling "everyday" driving properly.

It's not just Twitter

Inspired by the article, I tried looking up "Teresa May" on Google. At first, it decided I must surely mean "Theresa May", but at least offered to let me search for what I'd actually typed in case I really had meant it.

Even when I confirmed what I meant, though, the results were dominated by the UK's new PM. So I tried adding "-Theresa", at which point I was offered matches only for "Teresa May" ... several of which were still actually talking about the PM.

Switching that last result over to image search, about half of the first row of images are of the PM. Which has to be quite deflating to anyone actually hoping for pics of the soft porn star.

"Despite all evidence to the contrary, private cloud pundits keep telling us that data governance and application performance will keep workloads firmly entrenched in private data centres, or will capitulate to public cloud with a hybrid model. Of course, Salesforce’s decision to build on AWS calls into question these cosy platitudes"

I think you have to be a little careful here. Salesforce is, itself, a cloud service provider, already committed to the paradigm of doing everything in the cloud almost by definition. Their customers are companies who have already conquered the various barriers to 3rd-party cloud service adoption. It probably matters little to those customers if their data is in Salesforce's own data centers, or some other ?aaS provider's, except as a secondary concern about how reliable and secure it makes Salesforce's own services.

It seems to me that an enterprise that's a potential direct AWS/Azure consumer can have barriers to cloud adoption that Salesforce probably did not. If you're a financial services company, how does cloud adoption fit with the Safe Harbor mess? Is your data highly confidential or even top secret? Does your local internet access provide bandwidth suitable to the scale of data you need to push to the cloud (or pull back) daily?

These are edge cases, perhaps, and edge cases may not be something a sound business model can stand on, but there are also probably better examples I'm not thinking of.

Salesforce's decision is big, but I'm not sure it means exactly what I read the quote above as suggesting it means. "If Salesforce can do it, anyone can" doesn't quite seem the right conclusion to me, because if we assume there are any cases to make for doing some things in-house, I'm not sure any of them would have applied to Salesforce to start with. For them, the decision may have been a no-brainer in ways it wouldn't be for everyone.

Re: It's an interesting bit of enconomics

It also seems possible to me that the meteoric iron was just more convenient to obtain in workable quantities. Compared to your average terrestrial / telluric deposit of iron ore that's worked its way up from the mantle, it seems to me a meteoric deposit from a modestly-sized meteor will be much more accessible from the surface of the planet, and much more likely to appear as a large lump of mostly metal.

I've found a couple of fist-sized meteorites in my life, both made of a significant fraction of solid metal, and I wasn't even looking for them. Atypical perhaps, but it might suggest it's relatively easier to come by than underground ore of similar metallic chunkiness.

And, as we presume the ancient Egyptians didn't know much about alloys of iron, it also probably would have been handy for the meteoric variant to have lots of nickel in it. This knife has something approaching 10x the ratio of nickel to iron you usually find in terrestrial ore, making it much more resistant to corrosion without any extra input. It might have also made it tougher than low-carbon telluric iron, though I'm hardly sure; it might have had too high a ratio of nickel for that.

Re: They are none of those...

I find myself wondering how such wildly divergent sets of experiences and expectations with the same OSes develop.

In the world I have worked in for 20+ years, there's been no such distinction as the one being made here between daemons and processes. A "daemon" is loosely used to describe any process that doesn't end when its controlling terminal ends. In my experience, it's not a term reserved for init-managed processes.

In my world, only daemons expected to start with the OS are put in the init system. Not all long-running, daemonic applications are allowed to start just because the OS started, as this could cause serious problems depending on the application and server in question. For direct start/stop control, these applications have control scripts that nohup or otherwise double-fork them into the background specifically so they will not end with the controlling terminal. They aren't run as root or as a "plain" user, but as an application-specific, generic account, usually using sudo or something similar. If you don't disconnect the process from the terminal of someone who started it, it will die, so every control script or program does this.
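The guts of one of those hand-rolled control schemes are genuinely small. Here's a minimal Python sketch of the same idea - the paths and the `sleep` stand-in are hypothetical, and `start_new_session=True` is the programmatic equivalent of detaching with nohup/setsid:

```python
import os
import signal
import subprocess

PID_FILE = "/tmp/myapp.pid"   # hypothetical bookkeeping location

def start():
    """Launch the app detached from the controlling terminal."""
    with open("/tmp/myapp.log", "ab") as log:
        proc = subprocess.Popen(
            ["sleep", "300"],         # stand-in for the real application
            stdout=log, stderr=log,
            start_new_session=True,   # own session: survives terminal exit
        )
    with open(PID_FILE, "w") as f:
        f.write(str(proc.pid))        # store the PID for later control
    return proc.pid

def stop():
    """Stop the detached app using the stored PID."""
    with open(PID_FILE) as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGTERM)
    os.remove(PID_FILE)

start()
stop()
```

Storing the PID at launch is the whole trick; everything else (status checks, restart) is built on the same kill-by-stored-PID idea.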

I work in IT for an extremely large multinational, and to my knowledge no one there manages long-running Unix / Linux applications any other way than I describe above. If they start with the server, they go in an init config. Otherwise, they are, at some level, controlled by programs doing the equivalent of nohup specifically so they can be started "by hand". This is considered completely normal and not in any way onerous or dangerous, assuming privilege to run the control scripts is managed correctly (which is itself not very onerous, though not everyone does it as they should).

Separately, and on my specific team, we have scheduled jobs that sometimes fail in ways that result in a need to re-run them manually. They are not long-running daemons in the sense of a web or database server, but they do take 4-6 hours to run to completion. We're not going to stick batch jobs in the init config, and we're not going to stay logged in for that long babysitting the process. Even if we have to manually check on its success when it's done, we don't want it to terminate mid-stream because our shell timed out or VPN dropped. We nohup the job and expect it to stay running.

I don't know why anyone would think it's hard to control processes started this way. All you need is a PID and maybe sudo access to a control script that can use it. Except for some edge cases, it's pretty easy to either store the PID when you launch the process, or you can go find it after the fact with 'ps'. Sure, an init system that tracks the PID is cleaner and takes that bookkeeping off your hands, and would probably deal with those edge cases, but is that really considered that valuable in general?

So the idea that this systemd behavior would be normal and expected for a Unix-like system is just incredibly strange to me. It's normal for Windows, not Unix/Linux. Based on the IT world I've been in, it feels like a solution to a problem no one ever mentioned.

Re: Kevin Costner???

Re: first thing first

Yeah, that was my thought as well. The Windows version, at least as described in this article, seems like a fairly pedestrian Windows Explorer extension. It sounds like the OSX version is the one with an eyebrow-raising implementation. Aware of its implementation, I wouldn't be very keen to install that either.

Responses like Dropbox's, which we see all the time from vendors, saying how they've run it for however long and "battle tested" it, annoy me greatly. That's absence of evidence that it's exploitable, which is not the same thing as evidence that it's secure. Saying they had it penetration tested and/or externally reviewed would still not be iron-clad proof of security, but would at least better suggest they really understood the potential risks of their design and took serious steps to mitigate them.

My work organization forces us to change passwords every 90 days. It also enforces rules that tend to make the passwords we use hard to remember, forcing limits on character reuse and sequences, and requirements for special characters. It also won't let us reuse any of our past *ten* passwords, and it can tell if you are just making small adjustments. Password_1 going to Password_2 won't fly.

I sort of see the point. It is, after all, the password associated with our core corporate identity. We use it to sign in to just about everything, often including systems where we have privileged access. So nicking the password of the right people would be very powerful. Still, even half of 90 days is a long time to have someone's password, and most of the ways of getting it (malware installed via phishing) would probably be able to get the new one even if it was reset.

We can't install our own software on our PCs (for good reason) and there's no company package for a password manager. (There ought to be, IMO.)

I ended up finding a password pattern that I could memorize (through mnemonics) that met the password requirements. I also figured out how to mutate it very slightly each time I have to update it in a way which passes the history limits and is easy for me to keep track of.

I honestly have no idea what most people at my company do to manage their passwords. I'd bet money an awful lot of them write them down. Some I know are probably smart enough to use good password managers on their personal phones.

Re: The ones who've adopted Chip 'n Pin

Re: The ones who've adopted Chip 'n Pin

Debit cards have used PINs for as long as I've owned one, and long before they had chips. And the bank where I maintain my deposits sent me a chipped debit card long before the bank with whom I have a credit card did.

The ones who've adopted Chip 'n Pin

... are mostly the retailers that got hit hard with fraud, like Target and Home Depot.

And they still aren't "'n Pin". They use the chip but have you sign your name. I'm not sure what that's about, other than maybe some procedural/cultural inertia. It's always been ridiculously rare for anyone to validate your signature against your card, so I wish they'd get on with it.

Not until someone makes me, and maybe not even then

I already turn my nose up at the insurance companies that offer these little dongles that monitor my speed and driving habits. I'll tell you straight up, I'm very sure that's *not* going to earn me lower rates. Despite the fact that I've only had one accident, ever, over 20 years ago, and it was a low-speed fender-bender.

But even if it would lower my rates, I view paying more for not being monitored as just fine, exactly how I view things like paying a subscription rather than being tracked on-line to get "free" content. Until someone legislates that cars must have these things built in (and I fully expect that day to come), the folks who'd like me to use them can get stuffed. And maybe they can still get stuffed even once the cars come with them, depending on how hard or illegal they are to disable.

The idea, then, that I'd willingly choose to pay a *third party* to curate this data is mind-bogglingly ludicrous to me. Odds are good there's already a third party involved in the branded equivalents that the insurance companies offer directly, but why would I want to pick a middleman like that for myself?

The rest of the things this doodad purports to do sound like a solution desperately seeking problems. I at least see the utility in the insurance thing, even if I disagree with the wisdom of taking advantage of it.

My team runs our company's incident/problem/change solution. (Yes, the irony burns.) A few years back, we had tables in a QA DB that we no longer needed. DB administration at that level is managed by a dedicated DBA team, not the application team. We sent them a request for the table drops and, knowing our prod and test DBs had nothing to do with one another, thought nothing more of it.

Except, unbeknownst to us, the tool the DBAs use to perform such tasks connected to both test and prod systems alike, and the over-eager person involved issued DROP CASCADE against the tables in question *everywhere*. In the middle of the US morning / EU afternoon.

The only reason this did not completely destroy our production OLTP DB was that there were locks in play because of our level of user concurrency. (Logs later showed that the DBA actually tried the deletes several times when they failed.) Our prod reporting DB instance had no such protection and critical tables were wiped out. Restoring that took a long time because the tables were huge and, at the time, the reporting DB schema was not a 1:1 match with the OLTP system. (You can do that with fancier replication tools.) The reporting instance had to be restored from remote backup, which literally took days. Fortunately, for the duration, we were able to point most of our BAU features that relied on the reporting instance to the OLTP instance instead, accepting the modest risk of OLTP performance impact to keep important things working.

Happily, this event did produce both process and architecture changes in the way the DBA support tools were used and set up. And, probably, at least one staffing change. o_O

Re: El Reg is a customer of CloudFlare and uses its content-distribution network.

Eh? I quite regularly visit sites protected by CloudFlare over Tor, and have done so within the last couple of days. I typically have to answer a captcha to convince it I'm not some kind of bot, but it lets me in as long as I can do that.

Re: So, exactly...

Dropping the root filesystem (presumably from an installer boot) is fine. It's modifying the /sys/firmware/efi/efivars contents specifically that causes the problem. In other words, modifying files in there is translated into changing UEFI variables, and deleting files in there is translated into deleting the UEFI variables.

What if the interface actually needs to change?

Wholesale change in interfaces does happen in the real world. Eventually an interface needs to evolve so extensively that the old version has to be demised. Perhaps there are new technologies or standards to leverage - SOAP being replaced with REST, say. Or perhaps there's just been a broad design shift in what your service needs to do, affecting all its interfaces. I've seen both happen in environments I support, though the services weren't "micro" in the DevOps sense. Surely similar considerations still apply, though.

I guess in cases like this you can transitionally support both interfaces from your (micro)service (or add the new interface as a new service) and demise the old interface later.

That's still something you can't *just* cover with testing in a sufficiently complex (i.e. enterprise) environment, since you have to be very sure you know who all your consumers are before you demise older interfaces they might be using. Appropriate logging in your production instance can be invaluable here: you can see who is actually making requests, then reach out to them to make sure they're ready (and have tested) to consume your new interface.
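As a sketch of what I mean, here's a toy Python dispatcher that serves both interface versions side by side and counts which consumers are still hitting the old one (all the handler, route and consumer names are hypothetical):

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api")

# Hypothetical handlers for two co-existing interface versions.
def get_orders_v1(params):
    return {"orders": [], "format": "v1"}   # legacy shape, slated for demise

def get_orders_v2(params):
    return {"data": {"orders": []}, "format": "v2"}   # replacement interface

ROUTES = {"/v1/orders": get_orders_v1, "/v2/orders": get_orders_v2}
v1_consumers = Counter()   # who is still on the old interface?

def handle(path, consumer, params=None):
    if path.startswith("/v1/"):
        v1_consumers[consumer] += 1
        log.info("legacy call to %s by %s", path, consumer)
    return ROUTES[path](params or {})

handle("/v1/orders", consumer="billing-batch")
handle("/v2/orders", consumer="web-frontend")
print(dict(v1_consumers))   # the list of teams to chase before demising /v1
```

In a real service the consumer identity would come from auth credentials or client certs rather than a parameter, but the principle is the same: the demise decision becomes data-driven instead of guesswork.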

This doesn't mean the whole DevOps approach doesn't have merits, but at least as I see it discussed in brief in articles like this, the shiny, happy world they describe seems like it would only really manifest in environments where it's easy to manage all the service dependencies.

I get automated phone calls from them constantly, which go to voicemail and get deleted. They have not injected anything into my browsing experience so far as I know, but then I scour my browsing experience pretty hard, so it's possible I'm just stripping it out and never seeing it.

They also pestered my parents, who have limited tech savvy, and they were misled into thinking that getting the new modem would actually speed up their experience. That wasn't the case because their package had a performance cap below that of their old modem, so upgrading basically did nothing for them. Given my folks' level of expertise, I doubt the misleading was intentional, but I also doubt Comcast did anything to make the reality terribly clear.

I've no intention of taking their upgrade, as my modem works fine and I do nothing with my net that my existing bandwidth isn't overkill for. I don't want their WiFi. When I do upgrade, it'll be to a model I buy. The economics of that aren't great - renting their modem is cheap enough that buying my own will take a while to pay for itself. The main thing I care about is the device I get will be much more under my control (as much as anything is these days). Also, I read about a lot of people having performance and stability issues with the xfinity branded modems, which makes me want to stay far away.

Colossal ignorance

All they're going to do is surveil ordinary people who most likely have nothing sinister going on, while halfway-intelligent criminals and/or terrorists will simply use software the state does not control.

Sure, such measures may allow them to pluck some low-hanging fruit in the form of catching on to boneheads who have poor understanding of technology or operational security. But is that really worth such wholesale risks to civil liberty? While the really scary people laugh and carry on?

Re: no matter what MS force on us

"Why would anyone want a 'fix' to stop important security updates & fixes? Anyone declining updates puts themselves at risk as well as others."

Because some of us have learned that accepting patches sight-unseen is a proven recipe for disaster, and that it's much smarter to wait a week or so and see what complaints pop up to make sure that any given patch will not screw up our systems. This isn't some hypothetical risk - it happened several times this year alone.

If you read the article you should see the concern here isn't for enterprises with test labs and a release cycle, who get pretty strict control over how these updates roll out. This is a concern for home and SMB installations.

Obviously what one does should depend on the severity of the vulnerability and whether it's known to be exploited in the wild, but those of us who manage our own patch processing (even "just" on home systems) should have the ability to make that risk assessment.

Kudos

When I saw the headline I was quite sure this was an April Fool's prank, but I must say that the first page of the article had me wondering for a bit. I have to give credit to the author for the part about the missing cores in particular - the telling of that part feels quite plausible. Up until you get to the bits about Krakatoa + Chernobyl x1000 (the details of which are happily consigned to page two), I think this could fool a lot of people. I agree that we might see this republished in seriousness, especially if they don't read the whole thing.

I "knew" a Troll that fit this bill...

...but I tend to agree with a lot of the other commenters that most folks people readily consider trolls aren't particularly sadistic or even clearly narcissists. Granted, none of the folks in question were people I truly knew well enough to have real insight into their true personalities, nor am I an expert in psychology, so perhaps these traits were there but subtle. Some of them were quite bright, though, and were capable of Machiavellian twisting of a thread of discussion, though I am not at all sure any of them were doing so with any specific plan in mind beyond messing with people.

One, though, the one I refer to in my reply title, did seem a fit for most of the characteristics mentioned in this poll/study. He was extremely narcissistic - he simply could not ever admit to being wrong, even when faced with incontrovertible proof that something he posted was false. He spent a lot of his time talking down to people, posting mainly to disagree with folks and always in a disagreeable way, which variously fits the psychopathic and sadistic labels. And, my God, actually engaging in debate with him was like falling down a rabbit hole designed by Daedalus. He would constantly dance and twist and subvert, and if you weren't careful, after about 10 posts you were arguing about something totally different. This was often related to his refusal to accept lines of logic that undercut his argument - he would drag the discussion away from such things as a red herring debate tactic. He was quite good at it, using an ever-evolving, subtle shift through sidebar arguments that seemed related to the topic at the time, but accumulated to pull things ever further into a different area entirely.

To this day, I don't know if he was a fairly brilliant troll or someone who was deeply disturbed and lashing out online because he could. I lean towards the latter, though.

Interesting to see if other services follow

When I first got an Android phone, signing it up for Google Play also signed me up for cross-sell services like GMail, Drive and, yes, Google+. I do use Drive, but not the rest.

I don't use social media much to start with, and what I use it for, Facebook is sufficient and also where everyone I want to interact with in that way is found. Google+ thus served no purpose, and I considered it more unwanted "attack surface" for my online presence than anything, though I don't mean to suggest I was deeply paranoid about it.

Thus it was that I was pleased to learn how to disable my public Google+ profile. If anyone didn't know that was possible, ironically the best way to find the instructions is probably to Google "delete Google+ profile". You do want to read the instructions - some Google products are intimately linked and deleting your profile can delete other data, too, but most of the mainstream services like GMail and Drive are unaffected.

So I do wonder if they'll unlink the automatic creation of a Google+ account from signing up for their other offerings, since it's clearly not deeply integrated.

No...

How do you think the assistant is going to *do* all those things, and get all that data?

When I need directions, I still want to *look* at a map. I can figure out what I need to do much faster by doing that than by having a virtual conversation with my device. Having the map at hand calls for an app, even if it's an app the assistant-maker also built, and the assistant can directly integrate with it.

And when I have had an instant messaging chat with someone and am trying to remember the name of the person they told me to ask for at the local office? Well, I suppose it's possible a *really* damn impressive assistant could get that from the chat, though at that point I'd start to wonder what they'd need me for. But where are the chats stored? Sounds like the chats are stored in an app, to me. Did I have an IM chat with my friend via the assistant? Sounds like the chat needs an app interface of some kind.

How is an assistant going to act as a remote for my movie streaming? Do we all really want to *have* to talk to the device to look at a bit of action frame by frame?

If I want to connect to a new, walled-garden media streaming service a-la Netflix or Spotify, how is my assistant going to achieve that without some sort of app?

No, assistants are not going to be the death of apps. Apps may be replaced by something else someday, if web-rendered apps, exposed APIs, cloud-based screen rendering and so forth keep advancing. But even then the assistant is going to need to be able to integrate with whatever the apps become in order to do new things. The assistant won't have killed the apps - they'll have changed on their own for largely unrelated reasons.

I'm sure you were being sarcastic here, but...

Simple. If they have to act responsibly, that's a barrier to arbitrary action, usually intended to benefit them financially in some way, even if indirectly. I very much doubt Facebook worked with these folks purely out of interest in science, even if they had received no or minimal compensation. Knowing these sorts of things helps them figure out how best to monetize their users.

If companies and/or the academics working with them had to request consent or, possibly, were completely barred from such research, they would get to investigate how to monetize more slowly, or not at all, and they will resort to exactly this sort of red-herring argument to try and hedge against that risk.

The separation of concerns seems very thin

Other posters have mentioned this, but I'll pile in. If some company like Google has a wide-ranging amount of information about my interests, my communications and my movements, it's not much consolation to me that these private companies don't want to abuse that power the way a government might. The reason is that the government has the power to demand that information from the company (or to take it without their knowledge) for the sake of whatever it is the government might want to investigate me over.

As we've seen with the Snowden releases in the US in particular, the very act of the government tapping corporate intelligence stores can be contrived to occur in such a way that almost no one outside the channels that make it possible knows about it, and anyone involved who would like to make it public is under threat of severe criminal prosecution should they try.

It's fine and well that our governments have not, seemingly and so far, meaningfully abused the civil rights of their citizens using the information they now have access to. That is not a sufficient defense of the practice. The reason democratic nations have historically sought to rein in the knowledge freely available to a government's apparatus about the people governed is to limit the *possibility* of government abuse.

Quite simply, if a system that can be abused is left in place long enough, two things happen. One is that many of the governed people become inured to it, assuming it's OK because "it's always been like that". The other thing, which often comes only after the first is established, is that someone *does* abuse the system. It's human nature - either someone eventually won't be able to resist committing abuse, or someone will seek a position of power *specifically* because they recognize the potential for abuse they can execute.

As a species, we humans like to live under the conceit that the conditions we enjoy now will persist into the future without bound - that a historically decent government will never change to be otherwise. I think this is eminently foolish.

I'm hardly a doom-sayer, but it's hardly impossible for me to imagine future situations of civil disorder, most believably due to some natural disaster or resource constraint (water, power, food, etc.), where governments of what are today democratic and free societies might resort to more totalitarian means simply to try and keep things under control. (Martial law.) In situations like that, I believe you very much would not want to mix in such abuse-prone tools as a way to track basically everyone all the time (pervasive cameras, facial recognition, cell phones, centrally managed driverless cars, etc.). It's unwise to trust leaders with such tools to do the right thing in situations where civil rights are so specifically curtailed. History does not show good precedent.

On that note, one thing I'll disagree with in the original article is the notion that we owe Orwell for the caution of people my generation and older. While 1984 certainly stood out for some time and doubtless influenced many readers, for cautious people I know it is real, historical events that serve as more sobering reminders of what abusive governments can do with the power of extensive information about the people they govern. The examples set by the Soviet communist party, Nazi Germany, and the Red Scare in the US are much more frightening to me than any fiction. Imagine those regimes or movements with access to the information they could gain on their citizens today, especially if those citizens were raised to use the internet with limited caution.

Hope for the best, but plan for the worst. Enabling pervasive surveillance is unwise, even if it is not the government who directly surveils us.

The jobs do exist, but I've no idea how common they are

I am in the US, but I work for an international company for which the UK members have a strong leadership presence. My boss' boss is in the UK.

I'm a generalist. I know something about a lot of different things, and can use that to solve lots of problems, or create lots of solutions. And I've got a job where that's basically what I do professionally, where the breadth of my skills is specifically why I'm valued, and I'm paid very well. I've been where I'm at for some time, though, so I can't speak to how easy it is to find a job like mine, and it's something I do worry about should this job disappear or become unsavory. I *can* tell you my team wishes we could find more people with a breadth of skills.

Where I fit in best is in a place where specialists exist in their own silos. You have developers, DBAs, sysadmins, storage teams, and networking gurus. In places that divide specialties up like that, you often benefit from someone who is a bit like a business analyst, except instead of being the interface between developers and customers, they face the other direction, interfacing between developers and infrastructure / middleware.

What we find is that the developers often are wildly ignorant of the implications of the system's (virtual) physical design. The infrastructure teams often have no time to learn the ins and outs of the applications, in order to tune their systems for them. I help the developers create systems that won't be rubbish on the basis of the systems on which they run, and help the infrastructure folks design hardware that won't be rubbish for the needs of the application.

The challenge is in finding an organization that values this role. Not everyone does, and that's clear even within my company. What seems to make the tuning and problem solving skills valuable to people is when they're strapped for budget and they need to expand their system or make their existing scale of system run better. Tuning things can increase concurrent users on existing footprint or reduce infrastructure for same performance. And even in a cloudy context, the ability to achieve those things can be valued. But I fear that may be rare.

I would never do project management. It has nothing to do with why I'm in IT, and requires primarily the exercise of people skills, not technical ones. If I lost this position, I would look for a job as a systems architect - someone who looks at the big picture of software, infrastructure, APIs and whatnot and assembles it into a solution. I see a PM as someone who drives all the people involved to implement that vision. I would want to be the person creating the vision itself.

Canonical feels like the Apple of Linux Distros

I would like to see a Linux OS take off and get real market share and mind share in the broader device market, but as a power user, I would never personally want that Linux flavor to be Ubuntu. Of late their party line is that they know what the users want better than the users do. And *maybe* they do for users in general, or for new users that (somehow) find themselves using a device running Linux. But they sure as hell don't know what I like, because I don't like a lot of high-profile things they've done in recent Ubuntu releases.

That attitude of knowing what I want/need better than I do myself has been classic Apple for ages now, and it always kept me away from them. That attitude was adopted, to much angst of late, by Microsoft with TIFKAM, and it kept me away from Windows 8. And it keeps me away from things Canonical in the same way.

You can be pioneering without immediately throwing old paradigms under the bus. You can be progressive without remaking *everything* from scratch without a smooth transition. Does it take longer? Most likely. Does it cost more? Possibly. Will it give you a happy base of users who you help through the transition? I think it would.

It's great to be looking at the long road where today's 6-year-olds will be the device consumers of the future, but the fact is that right now we've got a ton of users bridging the desktop and phone/tablet paradigms. Making *them* want to use your product would get you significant market share which those 6-year-olds would grow up seeing. I don't understand the strategies that seem to decide those transitional people are irrelevant and/or can be made to change their opinions en masse. Given how poorly it seems to have gone in general, I think I'm probably not alone in feeling that way.

Re: At the risk of downvotes

I'm not sure the comparison with what happened to minis is 100% apt. That seems to compare better with predicting that in 5-10 years those using PCs might have them based around low-power ARM chips, which is certainly not impossible. The comparison between slabs and PCs (and notebooks) is as much about form factor (keyboard+mouse, etc. vs pure touch) and user interface as about the architecture of the system.

Now, the shift between on-client and remote processing by your application is possibly apt in that comparison, but that is actually not wholly integral to the PC vs slab debate, at least IMO. Modern slabs come with enough grunt to run certain things locally rather well *if you buy a high-end one*.

The migration back to the "cloud" version of a client-server model for various compute is partially about keeping the price of slabs low for growth into developing markets but also about control (vendor lock-in) by large corporations, the attraction of locking people into subscriptions versus one-off purchases, benefits of economies of scale, and the idea that it's nigh impossible to pirate stuff that doesn't execute locally.