Posts by billdehaan

Page:

Re: The concept is not really dead if you ask me

Likewise. In fact, oddly enough, despite my frequenting /r/Toronto and /r/Mississauga, I actually ended up meeting with someone from /r/Pebble to buy a used smartwatch last week.

Of course, with BBSes, if they were linked (FidoNet, PCBoard, Opus, RIME, or other), you were just as likely to be talking with someone from the other side of the planet as the other side of the street, too.

Re: The concept is not really dead if you ask me

Despite being an old fart, I didn't get onto Reddit until about a year ago, when a search for a technical question linked to a Reddit posting that I wanted to follow up on.

Looking into it a bit deeper, I found that the structure of Reddit follows that of Usenet more closely than that of BBSes. It's got a similar hierarchy, similar moderator structure, and similar layout. The same problems that historically plagued Usenet are still around on Reddit today, too.

I found it amusing, since Usenet today doesn't really resemble the Usenet of 1990 any more, being now more a repository of binaries than of discussion groups. Reddit lets people link to content, and post images/videos as thread starters, but it's more discussion-based than Usenet is now.

Re: Not dead yet

I'm unable to understand why email groups are pushed into forums

They're pushed into forums because for nontechnical people, "the web" and "the internet" are interchangeable terms.

Hell, for most tech support, they are interchangeable terms. I still remember trying to get support for Rogers' (major Canadian telco) usenet server back in the early 2000's, only to find that before I could get past the level one support, I had to explain to the level one support tech what usenet was. Even the knowledgeable ones said things like "oh, I know that, it's like a web board, right?".

There are wonderful protocols like RSS and NNTP that solve certain types of problems quickly, efficiently, and elegantly. And for the technically inclined (most Register readers would clear the bar), they are obvious. But for the mundanes, the average user on the street, if it's not on the web, or there isn't a simple iOS/Android app for it, they've already lost interest before you can explain the benefits.
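To show just how little machinery RSS actually needs, here's a toy sketch using nothing but the Python standard library. The feed snippet is invented for illustration; a real client would fetch it over HTTP, but the format itself is just plain XML:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 snippet, for illustration only.
FEED = """<rss version="2.0">
  <channel>
    <title>Example BBS News</title>
    <item><title>FidoNet gateway back up</title><link>http://example.org/1</link></item>
    <item><title>New door games installed</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def headlines(feed_xml):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(headlines(FEED))
```

A dozen lines of stdlib code gets you headlines and links; that's the kind of elegance that never survives contact with "but is there an app for it?".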

Re: Yay landfill!

Absolutely they are used.

My phone has a GPS, and also gets FM traffic updates, so it doesn't require data. It's also 5" in size, and has a couple of features cell phones don't seem to have.

But more importantly, it (a) is sunlight-readable, with a non-reflective screen, something that no phone seems to have; and (b) its screen is pressure-based (resistive), so it doesn't require skin contact. That may not sound like much, but in a Canadian winter, the ability to use your GPS with your gloves on is a selling point.

And that's just consumer GPS. Things like specialty trucker GPS don't really have cell phone apps that can replace them yet.

The last project named after a CEO's kid was the Ford Edsel

Like the Edsel, the Lisa was a dud that had some interesting ideas, but was a horror show for both customers and the company.

I actually used one of the first generation models. The experience was very similar to driving an Edsel, I imagine.

I was a student at a university that had one. How they got it, I don't know; there certainly wasn't any budget for such a thing. I suspect it was a demo/promotional unit that somehow ended up at a university as an educational credit, or a tax writeoff or something like that.

In any event, using it could be summed up by the infamous joke making the rounds at Apple at the time.

Person1: Knock, Knock

Person2: Who's there?

Person1: (sits motionless, and unresponsive, for 90 seconds)

Person1: "Lisa" (as if the previous 90 seconds had not happened)

Lisa was an interesting proof of concept of some neat technology (for 1983) which eventually did become mainstream. But as a product? Forget it.

They didn't take out Nokia

Nokia took itself out.

And I say that as a Nokia fanboy who still has his 5130, C3, 5180 Music, and Lumia 920, all still working. And that doesn't count the four other CDMA carrier-locked handsets (one is analog only, from 1998 or so) that are still lying around.

Nokia, like Blackberry, was a fantastic cell phone manufacturer. But when the iPhone dropped in 2007, the market ceased being about phones with computer features, and became a market for portable computers with phone features, and neither Nokia nor Blackberry were prepared for the switch.

Sure, there was MeeGo, which was always two or three months away from being ready, and all sorts of other Hail Mary pass fantasies, but there was never really a realistic plan for a consumer-friendly touchscreen smartphone with the usability of the iPhone.

S60 was an engineering marvel, and the usability of the interface was tolerable when there was nothing to compare it against in 2006, but once the iPhone showed up, Nokia lost the usability race, and they needed something different.

Whether they switched to Tizen, Android, or Windows Phone, Nokia was pretty much reduced to being a minor player. They could have switched to Android, but they didn't think they could compete with Samsung, and they were right. Switching to Windows Phone, they figured they could win that market, which they did. The problem was that they won a market that really wasn't worth winning.

I store my high security passwords in a Keepass wallet that's on a VeraCrypt volume.

I store my really high security passwords (ie. banking info) in a Keepass wallet that's on a VeraCrypt volume that's on a portable drive that's disconnected and locked in a fireproof safe when not being used.

Password managers don't have to be networked applications. There are many standalone password wallets that are essentially just password-protected local files.

My favourite is KeePass (link). It's free, open source, and available on numerous platforms, both desktop and mobile. And most importantly, it's been audited by security experts like nobody's business.

I agree, "cloud-based password manager" can be synonymous with "single point of failure". But if your passwords are stored in an encrypted file on your Windows/Mac/Linux/Android/iOS box/tablet/phone, an attacker is going to have to access it either physically or remotely before they can even start cracking the password file.
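And even once an attacker has the file, a local wallet's resistance to offline cracking mostly comes down to key stretching. A rough sketch of the general idea (not KeePass's actual key-derivation code, which uses its own KDF and settings), using only the Python standard library:

```python
import hashlib
import os

def derive_key(master_password, salt, iterations=600_000):
    """Stretch a master password into a 256-bit key with PBKDF2-HMAC-SHA256.

    A high iteration count makes every password guess expensive for an
    attacker who has stolen the encrypted wallet file. (This illustrates
    the technique only; KeePass has its own KDF implementation.)
    """
    return hashlib.pbkdf2_hmac("sha256",
                               master_password.encode("utf-8"),
                               salt,
                               iterations,
                               dklen=32)

salt = os.urandom(16)   # stored alongside the wallet; not secret
key = derive_key("correct horse battery staple", salt)
print(len(key))         # 32 bytes, i.e. a 256-bit key
```

The salt prevents precomputed-table attacks, and the iteration count turns a billion-guesses-per-second attack into a few-thousand-guesses-per-second one.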

Been there, done that, had the stitches pulled

Oh, this story definitely rings a bell.

Back around 1988 or so, I worked in a shared lab environment. There was no network, and it was very much "every man for himself" development.

Every group had its own naming convention, naturally. And this was in the days of DOS, with 8.3 file and directory names. So, I created a root directory DEVTEAMS, under which I'd put a READ.ME file stating that the subdirectories could be used by anyone. I figured that way, anyone who backed up the DEVTEAMS directory would back up every group's work, and we'd have multiple backups. Backups were done with floppies, which were painfully slow, so I tried to compartmentalize all the development into one folder tree.

One machine had both a transputer card (remember those?) and a special video card. My group was doing video work; another group was doing work on the transputer, so we created \DEVTEAMS\VIDPROJ for the video project. I even created \DEVTEAMS\TRANSPTR for the other team, though they never used it.

Now, for those unfamiliar with transputers, they were processors designed for massively parallel systems, programmed in a non-sequential language called Occam to take advantage of this. However, because of this parallelism, source files were not stored sequentially. A 12-line C program could live in a single "hello.c", but the Occam equivalent wasn't a single file. Instead, it would be 12 files with names like ~24nkj24.jd8 and the like, which the Occam editor would link into the environment.

One day, we went to run a video test, and discovered that there was less than 2kb free on the 20MB drive. So, one of my teammates cleaned up enough space on the disk to run the test. A day later, the manager of the other group became hysterical that our group had destroyed six months of their work.

Fortunately, my teammate had backed up the machine before deleting anything; however, the other team's project directory wasn't on the backups.

Where my team and the others used DEVTEAMS, this group decided to go their own way. They decided to use their group members' initials for the directory name. So James, Uri, Norm, and Kwok put all of their transputer files in the subdirectory... C:\JUNK

Yes, on a shared machine, they set up a directory called JUNK, and filled it with 300 binary files with names like $3j5a1.d7x, and were shocked when people looking to clean out dead files didn't realize that those were critical project files.

Although they weren't so critical that their team ever bothered to back them up, of course.

Re: Oh look, they re-invented the HTC Touch Diamond.

That's not profit it's theft.

It's whatever the market will bear, sadly.

I, too, want a small, usable cell phone, like the 3.5" or 4.0" phones of old. Things you could throw in your pocket, charge every couple of days (or weeks), rather than mini-tablets that struggle to retain a charge for a day, scratch if you breathe on them, require hip holsters or special carrying cases, can't be used with one hand, and cost five times what my Nokias did ten years ago.

I picked up a Samsung S7 refurb over the summer for Cdn$200 (about £115), because at 5.1", it was the smallest usable phone I could find. And at 5.1" it boasted about having a "one-handed mode", as if this were an oddity.

Hard to believe that when Steve Jobs rolled out iPhone 1.0, he actually apologized for the size of the overly large 3.5" phone. Today, phones with the word "compact" in their name clock in at 4.7" or larger.

There are 4" phones available out there, yes. But they're universally horribly underpowered and practically unusable. This Palm iteration is practically the only thing I've seen that even tries to be usable.

If it weren't for the fact that my Pebble smartwatch requires an Android or iOS application to connect to the phone, I would probably switch back to my Nokia C3 or 5130.

Re: Why would anyone tolerate this?

Why would anyone tolerate this?

Inertia.

Back in the old days, I was astounded at the number of blatantly obvious scams that people fell for in email. Never mind the atrocious grammar and spelling of the scam emails; a casual look at the sender field would show you "From: Bill Gates <asdh98y423j4k32hh89@9a7dasdlkj34234.cn>".
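The display-name trick is easy to demonstrate with Python's standard library. A From: header carries both a free-form display name and the actual address, and a mail client that shows only the former is doing its users a disservice:

```python
from email.utils import parseaddr

# A From: header is a (display name, address) pair; the display name
# is free-form text the sender can set to anything at all.
header = 'Bill Gates <asdh98y423j4k32hh89@9a7dasdlkj34234.cn>'
name, addr = parseaddr(header)
print(name)   # the friendly-looking part a naive client shows
print(addr)   # the give-away address it hides
```

Any client that displayed `addr` alongside `name` would have made those scams obvious at a glance.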

And then I saw a mundane friend's PC, where she used Outlook Express. Everyone I knew was running Eudora, or The Bat!, or Thunderbird, or Agent, or the like. Even the most unsophisticated user I knew was running the Mozilla mail client. I didn't know anyone who ran Outlook Express.

And yet this mundane did. As did all of her friends. Why? "It came with the computer". When she got a dial-up ISP, the instructions for email were for OE, and so she used OE. And in OE, those spam emails appeared as being from "Bill Gates"; the email address didn't appear.

There are more mundanes than software people. Most will simply use what comes with their machine, or phone, or tablet. Most use webmail now, either GMail, or Outlook, or their ISP's. But those that don't will mostly use the first thing they find. And since Windows 10 includes a mail client, that's what they will use.

Re: Schadenfreude

Ah, many thanks for the explanation.

I wasn't parsing it phonetically; when I saw "NIC" and "forced download" in the same sentence, I was thinking of the NIC as the network interface controller doing the download. Unsurprisingly, "Win-10-NIC" makes no sense whatsoever in that context.

Back in the 1990s, the joke used to be that "At Microsoft, quality is job 3.1" (based on Ford's then-current slogan "At Ford, quality is job one").

Unfortunately, things have gotten worse, not better. BTC (before the cloud), customers would look at a next-gen product blowing up in deployment, and go "hmm, nope, we'll stay with what we have, thanks". Today, that's not an option, as MS is pushing people off of their existing systems, and into the new tech, whether they like it or not.

That could be beneficial, if MS was releasing solid, secure improvements to the customer stream. But they aren't. They're flailing around, in mad panic, adding features that they want, not what the customer has asked for. And worse, by concentrating on the shiny new features, they're ignoring the core features of the operating system.

I thought IBM's OS/2 1.3 CSD 5050, known as "the CSD of doom", was the nadir of consumer operating system rollouts. Even WinME wasn't that bad. But MS has now surpassed it.

When customers are not only unwilling to implement vendor fixes, but discussion fora are filled with users talking about how to bypass, and even defeat, vendor patches because they're afraid of them, there's a serious problem. Unfortunately, Microsoft doesn't seem to care all that much, based on how they're doubling down on their rollout strategy.

Had a new co-worker with something similar. He clicked the link in the "Mandatory Training" email, and was reprimanded for clicking on a spammy link. A spammy internal link, but still, he should have forwarded it to the internal security "check link for validity" service, which no one was using.

It turned out he actually had. Being a new employee, he'd followed the policy verbatim.

It turned out that the account you were supposed to forward suspicious links to had its spam filter cranked up to maximum sensitivity. In other words, the link-checking account blocked all suspicious incoming links from being seen by the very team that was supposed to check them. Which of course explained why "no one used it". People had in fact been using it for months, but all their emails had been deleted before being read.

Management then asked why no one noticed or commented on the fact that IT had not responded to their submissions. "We're so used to being ignored that it didn't seem worth mentioning" was the answer, much to the shock of executives.

And in the same email, too

Several years ago, our IT sent out one of their OMG-world-is-ending ALL CAPS blanket emails to the company.

To summarize, it said:

"A new malware attack is being spread through malformed URLs in email links. Our firewall is currently not configured to protect against these types of attacks, and we are currently waiting for a fix from the vendor. In the meantime, employees are not, under any circumstances whatsoever to click on any external links. Disciplinary action will be taken against those who fail to comply with this mandate.

You are required to confirm that you have read, and understood this new mandate. You must sign the electronic form at www.externalcorp.com/signatures.asp no later than Friday. Failure to comply will result in disciplinary action, including termination".

Yes, employees were required to click an external link in order to promise not to click on external links. With both actions being grounds for dismissal.

I've worked on rail line overviews. In the bad old days, you had to install either Visual Studio or Office just to get some god damned necessary DLL, which was absurd. Fortunately, those days are long gone.

One of the most important restrictions we have with these systems is to NOT install third party software on them. They're dedicated display systems, not general-purpose computers.

Re: "the problem was broken hardware and so not its fault"

Yes, I had the same reaction back with OS/2 running on a PS/2 back in the 1990s.

IBM said flat out that the problem wasn't the IBM OS/2 software, but that I should talk with the vendor of the garbage PC, a PS/2 model 80, that was produced by, uh, IBM.

Oh, but that's the "IBM Personal Computer Company", a totally different beast, I was told. Uh huh.

Of course, the IBM PCC in turn then strongly recommended against using "that operating system", and were willing to send me a copy of Windows 95, gratis, to solve my problem.

It looks like once companies reach a certain size, they all start to act the same way. Back in the 1980s, Microsoft was the scrappy upstart taking on the evil IBM. And then after Apple stumbled and had a near-death experience in the 1990s, it was the scrappy one, fighting the monstrous Microsoft. And then it was virtuous Google (motto: "Don't be evil"). The wheel keeps turning.

There's really something to be said for non-monolithic systems. Maybe I don't want my watch reporting to my cell phone, or my phone taking orders from the tablet, or the tablet being driven by the PC, and the PC beholden to the cloud.

Having competing techs at different levels may mean learning more about each system, and having less integration, but it also means not having to deal with a common fault incapacitating all my systems at the same time.

Re: Not sure about that

I've seen a few, but half as many have switched back.

Back in the late 1990s, a female friend was a graphic designer, doing PageMaker and Photoshop type things I don't pretend to understand, in a Microsoft shop. Most machines ran either Windows 98, or Windows NT 3.5 for server-type functions (backup, firewall, etc.).

Her contempt for Windows bordered on the incandescent. She referred to everything as "billware", "Micro$hit", "Internet Exploder", and the like. If only she would be allowed to have a Mac, life would be good.

And then she got one. With her own money. This was back in the days of Mac OS 7. The first shock was the cost. She knew it would be more than a Windows PC, but was shocked when the final cost turned out to be almost three times that of the PC. Unlike the Macintosh, PCs had competition, and that drove the price down. It drove quality down too, if you weren't careful, but you had options.

The second shock was learning that the "megahertz myth" really wasn't. Yes, Steve Jobs had shown a Mac outperforming a PC in a demo, but it was later determined that the PC was basically stripped down, and running an obsolete version. In real world terms, the three times more expensive Mac was roughly half the speed of the PC. And worse, things that were taken for granted in the Windows world, like virtual memory and pre-emptive multitasking, were features that were "coming soon" on the Mac.

Also, the vaunted stability wasn't there. She had more CPU hangs, operating system tossups, and data loss in two months with the Mac than in the past two years on the PC.

But the real offense, to her, was her interactions with the Mac-loving co-workers who had sold her on this. Every complaint she raised was dismissed as "that's just how it is", and they were annoyed that she was complaining. When she asked how they could possibly have told her that this buggy, expensive, data-losing system was superior to what she had, their only answer was "it's not Windows". At which point, they proceeded to list problems with Windows that my friend had never experienced in all the years she'd been using it.

A year later, she bought a newer PC, and got her productivity back. But she wasn't complaining about "billware" anymore. Suddenly, the promised land of milk and honey didn't seem so appealing any more.

The problem is that later, when Mac did become a more serious system, and actually delivered on the promises that had been made earlier, she wouldn't hear of it. "Fool me once", and all that.

I think a lot of people have brand loyalty, and some have brand fanaticism. But a lot more have a complete loathing of a system, and prefer a competitor's system, based solely on what they think it has.

Android and iOS each have their merits over the other. So too do macOS, Windows, and Linux. Using one, and preferring it, doesn't mean that the others are utter garbage, or that users of those systems are reprehensible losers. It just means that their users have different priorities than you do.

Re: Snap.

I've bricked my share of machines over the decades, from embedded video and MP3 players, and simpler-era Z80 CP/M boxes, to modern i7-based Ubuntu machines. I've even brought down a Control Data Cyber back in the day, and several VAXen.

However, none of those compares with scrubbing user data at the vendor level. Microsoft has had bad rollouts before that have bricked huge swaths of the user base, at the cost of time and money. So have IBM, DEC, Apple, and many application vendors.

But deleting user data is a different story. And this was not as the result of a user operation, it was inflicted on users by the vendor. That alone makes Microsoft's cockup much worse, and singles them out for well-deserved scorn.

If Apple pushed out an update that locked every iPhone for 24 hours, it would be a disaster as well, but if they were able to return the phones to their previous state, it would be an "outage". But for people with 80GB of user data, waking up to find that they only have 1GB left, because an unrequested Microsoft update scrubbed the other 79GB, is an unparalleled screw up, and Microsoft well and truly deserves to have their noses rubbed in it for a decade to come.

And I say that as someone who, while not exactly a cheerleader for Microsoft, has been referred to as an "apologist" because I happily ran a Windows Phone for several years.

Screwing up an update is one thing. Deleting user data is something else, and falls into the "you had one job" level of screwup.

Re: I just got back from a rather large data center.

Ah, I've lived this, but never had such a nicely-encapsulated reference for it before, my thanks.

Back in the day when I was doing Serious Defence Work, I was responsible for evaluating, and recommending, compiler purchases. I have a long winded story which I will summarize by saying that a $250,000 Ada compiler was purchased practically on a whim, and approved within two working days, while a Turbo C (not C++, this was 1988 or so) compiler that had a retail price of $49 took over 18 months, and about a 40 page list of required approval signatures.

Being that we were doing Navy work, the common saying at the time was that it didn't matter if you were building a dinghy or an aircraft carrier, it was the same amount of paperwork, but we could get you the carrier faster.

Now, I shall simply say it is a "bikeshed" moment, and pass on the link.

Re: "Hurtful"

Likewise the complaint that "parent/child" terminology with respect to processes had to be renamed, because it was hurtful to orphans.

And always, it's never the complainer who's offended; he/she/it is always complaining pre-emptively on behalf of others who they believe will be offended. Meanwhile, the supposedly wounded party usually, as you've pointed out, thinks the entire thing is silly.

Back in the day, I had a census taker squeamishly try to talk to me about disabilities. I'm blind in one eye (childhood trauma), but I don't consider myself disabled, although the census taker did. It was pathetic watching this bureaucrat trying to be excruciatingly sensitive about the fact that I'm missing an eye, when I thought the sensitivity was just laughable.

Actually, in the 1960s, the word "black" was considered offensive by many, who demanded to be called Negroes instead. Seriously. And then in the 1970s, "Negro" was deemed offensive, and the new term was "African-Canadian" (or -American, as appropriate).

The reverse also occurs. The terms "gay" and "queer" were historically insults, until the gay community simply decided to start using the terms themselves, and the words lost the ability to insult.

There's a significant difference between using a word intentionally to insult, and being overly sensitive.

And then, there are things which go beyond parody. Previously, the exemplar for that was the O.J. Simpson case, where the word "nigger" was treated like it was a crucifix and the rest of the world was vampires, and to prove it, the news media interviewed a famous musician named Eazy-E. The problem is that Eazy-E was in a group called N.W.A, which stood for "Niggaz With Attitude". So, you had someone who had put the word in his own group's name saying that it was horrible and shocking and anyone who used it belonged in prison. By that logic, he couldn't even say his own band's name.

That was pretty much the tipping point for political correctness. Well, here we are, 23 years later, and it looks like the bar may be raised.

Re: @ billdehaan

As to wordperfect, it was a product that preceded the technology that you seemingly take for granted upon the PC namely GUI, cheap RAM, fast CPU and communications.

Well, yes, that was largely the point.

The application developers of yesteryear spent great amounts of time and effort developing installation branch paths and reams of drivers. Today that's all done in the operating system, where it belongs. But that technology that we "take for granted" comes at a cost. And that cost is what a lot of people criticize as bloat.

People complain that today's 16GB machines do the same things as machines did thirty years ago in 640K. But thirty years ago, it was perfectly acceptable to expect end users to know their PC's memory map, the IRQ settings of their PC, and that they be able to juggle device driver load sequences themselves.

As to your video card and you belief that everything happened without some smart people

I never said that. If you can't make your point without lying, your point isn't really worth much.

As for what I was doing thirty years ago, I was working on embedded real-time controllers for fighter and commercial aircraft. I have friends still doing that today. If you think writing DSP software allows developers today to be sloppy, and that efficiency is not a requirement, try to write a landing gear controller interface in 768 bytes.

Of course there were brilliant programmers in 1990. And there are brilliant programmers today. And there were horribly inefficient coders in 1990, just as there are today, too. The idea that the industry has taken a step backwards is a myth. People pining for "the good old days" rarely lived through them.

The problem with the "good old days" is that unlike what a lot of people imagine, computers didn't manage to do all the heavy lifting in less memory and processing power than today, not really. What they did was offload the heavy lifting onto the user.

Yes, I remember using WordPerfect on DOS 3.x, and it was awesome. But if you wanted to see what your document would look like when you printed it, you printed it. Eventually, yes, they added preview mode, which would give you a graphic rendition of what it would look like, but you couldn't edit in it, you had to flip back and forth. And it required a WordPerfect specific driver for the video card. And a WordPerfect specific driver for the printer model you had, too.

In 1990, I replaced a video card with a model from a different vendor, and I had to reinstall something like 18 different applications, from scratch, and then reconfigure them each to use the new card. Today, you replace a video card, boot to the lowest resolution, download drivers from the internet, reboot, and you're done.

There's definitely room for improvement in terms of programmer efficiency, but the idea that we're using a thousand times the CPU and memory as we were thirty years ago and it's all being wasted is a fantasy. Back then, less than 5% of the population had a computer, and there was a reason for that. PCs were very arcane, and difficult for non-hobbyists to use. Today, we get a lot of ease of use under the covers. That takes a lot of resources and effort.

The idea that programmers thirty years ago were more brilliant and efficient than the ones today is completely wrong. I know; I was one of them thirty years ago. Believe me, we weren't any smarter or better than the current generation.

When my first gen 2014 Moto G finally started to give up the ghost three months ago, I looked around, and rather than buy another "value" (ie. cheap) phone, I bought myself a Samsung Galaxy S7.

Of course, I bought it refurbished. With a three year warranty (it is a refurb, after all) and taxes, it was still under Cdn$400, which is about £235 as of today's exchange rate. It meets, and in fact by a wide margin it exceeds, my needs.

I've got a voice-and-SMS pay-as-you-go service, with no data, which costs me Cdn$25 (£15) per year for minimum service, though some years I use twice, or possibly even three times, that.

I know people who spend nearly Cdn$300 a month on their data plan. And that doesn't even include the frigging phone.

Unsurprisingly, many of these people who complain of overage charges see nothing wrong with yakking for 45 minutes on their cell, standing less than 10 feet from a landline that has no cellular limit. Or watch movies on their phone by streaming on the subway every day to work, because it's "too much effort" to remember to download a local copy off Netflix/Amazon the night before. So they end up spending $15 in data charges to stream a movie that's $12 to see in the theater.

And that's Apple's market. Of course they're going to fleece them for everything they possibly can. I believe it was Barnum who stated that it was morally wrong to not separate the foolish from their money, and that could be Apple's mission statement these days.

The problem is with people who believe that sanitizing the language of offensive terms will make the offensiveness disappear.

All it means is that new terms will be used.

Fifty years ago, people missing a leg, or an eye were called "cripples". Then it was decided that term was horribly offensive, and they were to be called "disabled" instead, as it was more sensitive.

Until about twenty years later, when "disabled" was in turn considered horribly offensive, and the new term "differently abled" was to be used. But that was a joke, because there's nothing "abled" about missing a leg, or an eye, and so people were trying to find a new inoffensive term.

It's a losing battle. It's not like disabilities are going to disappear because you call them something different.

The most common description of this behaviour is "Get woke, go broke".

It's not whether it's a consideration, it's usually when it becomes the consideration, that things go to hell rapidly.

When you're more concerned about offending people than you are about making a good product, it's a losing game. Unless you're an ass, you're not trying to offend people in the first place. And if you do so through ignorance, most people aren't going to be upset.

The people who are upset are the types who get upset by everything. When you see complaints that eating salad is racist, wearing earrings mean you support enslavement of Africans, braiding your hair is a signal that you're a white supremacist, going to a wedding means you think women should be oppressed, etc., you're dealing with people who aren't playing with a full deck.

There are real issues with racism and sexism in the world. Fretting about technical terminology isn't going to change them in the slightest. There are better things to spend time and energy on.

Re: Testing the staff

My place too. We are always told to ignore external domains yet we have to use external domains in the course of our work. Nobody keeps an easily findable list on the Intranet and new ones that we are supposed to use appear from time to time without any formal announcement.

A few years back, IT sent out a near-hysterical email to the entire company. It stated that there was an extremely serious exploit in URL handling, and that until it was addressed, under no circumstances were employees to click on unfamiliar links, none whatsoever.

Naturally, the second paragraph followed up with "if you wish more details on the exploit, go to www.microsoft.com/blahblah".

Yes, after about 100 words of why you should not click on links, they followed up with a link to click on.

Of course, this is the same IT that in one company newsletter announced on page 3 that IE was required and Firefox and Chrome were banned, then in the second story on page 7 stated that IE was no longer to be used, Firefox was required, and Chrome would not only be blocked at the gateway, disciplinary action would be taken against anyone who installed it, and finally on page 9, in the third story, listed the schedule for the Chrome rollout.

IT's number one complaint in the most recent survey? That they don't get any respect from the rest of the company. Imagine that.

Re: Testing the staff

I work for a company who deliberately send spoof emails to staff to see who opens them so they can berate us.

Mine did the same. It was hilarious.

Although the intent was to show upper management that the peons didn't understand the IT issues involved, it actually showed the reverse.

IT sent out an email purporting to be from the parking authority, saying each user (identified by name) owed something like $70 for a month-old ticket for parking illegally in the building (identified by address). So, it already had a great deal of personal info. It concluded with a spammy "click here to see the photo the officer took of your car" link.

The idea was to see how many people "foolishly" clicked on the link.

The thing is, we're an Exchange-based shop. And the "spam" message arrived, not via the external internet gateway, but internally as an Exchange message. That meant it was sent from an internal source. Who would have that authority? Well, the actual parking authority would. Secondly, the spam email's "click here to see the photo" had a URL that pointed to an internal server, by name, within our network.
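A rough sketch, in Python, of the sort of check the savvier recipients performed. All hostnames, headers, and link targets here are invented for illustration; the real giveaways were simply an all-internal delivery path plus a "photo" link pointing at an internal server by name.

```python
# Sketch of the check some users performed on the "parking ticket" phish.
# Every value below is hypothetical. A real message carries a chain of
# Received: headers recording each hop the mail took on its way in.
from email import message_from_string

raw = """\
Received: from EXCH01.corp.example.com ([10.1.2.3])
From: Parking Authority <parking@city.example.com>
Subject: Unpaid parking violation
Content-Type: text/plain

Click here to see the photo: http://intranet-srv7.corp.example.com/photo
"""

msg = message_from_string(raw)

# 1. Every hop is an internal host: the message never crossed the
#    external gateway, so it was injected from inside the network.
hops = msg.get_all("Received", [])
internal_origin = all(".corp.example.com" in hop for hop in hops)

# 2. The supposedly external link actually points, by name, at a
#    server inside our own network.
link_is_internal = "intranet-srv7.corp.example.com" in msg.get_payload()

if internal_origin and link_is_internal:
    print("Report to IT: claims to be external but originated internally")
```

Either signal on its own is suspicious; both together meant the message could only have come from inside the house.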

Something like 45% of the users reverse engineered it, and reported it to IT. Some even escalated it higher, as it looked like our IT infrastructure had been compromised.

Of course, quite a number of us backtraced the internal machine reference to see if it had been breached, with many checking out the URL in sandboxes and virtual machines.

IT's response to all these probes was to say that "45% of users clicked on the link!" to upper management. When asked by upper management "how many of those were done by people who reported it was a scam, who were attempting to reverse engineer it?", IT sort of shuffled their feet and had to admit they had no idea. They were also forced to admit that maybe they should not have sent it internally with valid Exchange credentials, since if those are compromised, people clicking on links is the least of our worries.

In the end, they were forced to admit that, yeah, the entire exercise was pointless. But at least they learned that the user base was more savvy than the IT department...

This song could have been about me

Details differ, but this story is about 90% in sync with one of my own.

In the early 1980s, I was on a mainframe system that had a punchcard interface, and a terminal interface, which was actually just a terminal that simulated the punchcard system. This is important to the story.

The system used 8 different queues, and the terminal queue was only one of them. However, all terminal jobs, for all users, went through the same queue, queue #1. So if 200 users were running terminal jobs in queue #1 and you ran your job in queue #2, yours would run much faster.

However, terminals could not use any queue other than queue #1. So, the secret (documented in the manual) was to use the SUBMIT command to submit the job to another queue. Of course, you'd have to write all of the terminal inputs into the card deck ahead of time so your job didn't get stuck, but once you did, you'd find your job would run in 90 seconds rather than 90 minutes.

Now, at a terminal, you logged in with username/password. When you submitted a job to a queue, you needed to put a /USER(username,password) card at the top so the job would log into the queue. A neat trick was that the card deck you submitted was the INPUT file, and you could play with it like a file pointer.

In other words, the following job:

/USER(username,password)

REWIND(INPUT)

COPY(INPUT,OUTPUT)

When submitted, it would result in the output of your job appearing in your queue, and you would see USERNAME(MYUSERNAME,MYPASSWORD) in clear text. Amusing, but not very useful.

However, the mainframe was networked to another, and when you changed your password on one, it would change it on the other... eventually. So you could run this job to see what your current password was, i.e. whether the change had propagated over the network yet.

But how did it propagate over the network, I wondered. It turned out it was done as another job in the queue, but with the site admin's credentials. So, I wrote a batch job that changed my password, which looked like this:

/USER

CHANGEPASSWORD(password,newpassword)

REWIND(INPUT)

COPY(INPUT,OUTPUT)

And lo and behold, the following appeared in my batch queue:

USERNAME(myusername,mynewpassword)

CHANGEPASSWORDREQUEST

USERNAME(adminname,adminpassword)

CHANGEUSERPASSWORD(myusername,mynewpassword)

USERNAME(myusername,mynewpassword)

**END JOB**

And lo, I had the adminpassword, in clear text, in my input queue.

The admins denied I could do this. So, I logged in using their password. I was called into the head of network security's office who said no, this was not possible, and then I logged in at a terminal in front of him. He still didn't believe me, and he changed the admin password. I told him I could get it in 10 minutes, and I did.

The end result was "tell anyone about this and you will not only be fired, I will have you killed" or words to that effect.

I had been hoping/expecting that I'd uncovered an implementation issue that they hadn't properly configured, which could be fixed now that they knew of it. Instead, I'd found a design flaw in the network security layer that required an operating system patch. This was $BIGNAME$ corporation, which had mainframes around the world, in sensitive areas (far more sensitive than in the industry I was using it in), and the idea that a low-level user could crack the admin password in under 10 minutes stopped several hearts in the boardroom.

Eight months later, I was called back into the head of network security's office, and told to try it again. The bug had been addressed in a patch, but it was still being rolled out worldwide, and I was still not to speak of it "ever again". Which, technically, I guess I am, except (a) this story is 30+ years old, (b) the mainframe I refer to is almost entirely obsolete, as is the network it ran on, and (c) the issue would only affect said mainframes whose patch levels haven't reached the circa-1982 fix yet.
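For the curious, the flaw can be modelled as a toy simulation. The real system was a proprietary 1980s mainframe, so everything below is a simplified guess at the mechanism described above: the password-change request made the system append an admin-credentialed job to the same input stream, and REWIND(INPUT)/COPY(INPUT,OUTPUT) dumped that stream verbatim, credentials and all.

```python
# Toy model of the batch-queue flaw (all names invented; the real
# system was a proprietary 1980s mainframe). The key detail: a
# password change caused the system to append an admin-credentialed
# follow-up job to the SAME input stream that REWIND/COPY could dump.

def run_job(deck, admin_creds):
    """Process a card deck; return the job's OUTPUT listing."""
    cards = list(deck)
    for card in deck:
        if card.startswith("CHANGEPASSWORD"):
            # The propagation job rides along with admin credentials.
            cards += [
                "CHANGEPASSWORDREQUEST",
                f"USERNAME({admin_creds[0]},{admin_creds[1]})",
            ]
    output = []
    for card in cards:
        if card == "COPY(INPUT,OUTPUT)":
            # REWIND happened first: the entire stream is dumped,
            # including the appended admin job.
            output.extend(cards)
    return output

listing = run_job(
    ["CHANGEPASSWORD(old,new)", "REWIND(INPUT)", "COPY(INPUT,OUTPUT)"],
    admin_creds=("adminname", "adminpassword"),
)
print("\n".join(listing))  # the admin password appears in clear text
```

The design flaw, in this toy form, is that privileged follow-up jobs shared an input stream that the unprivileged submitter could still read back.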

Re: What a difference a few generations makes

"Hands on management" is something that actually works.

Indeed. There's something to be said for seeing, rather than hearing about, what's actually going on.

I worked at a company where the technical disconnect was fairly massive. Engineers were equipped with Core Duo PCs with 2GB of memory (and this was in 2015), which were additionally clogged with IT mandated firewall/antivirus/antipiracy/encryption, all running at maximum priority, while execs had i7 laptops bursting with 32GB of memory and ultrafast SSDs, with all processes exquisitely tuned.

In other words, the people who needed fast computers for their work had machines that were running at a tenth the speed of the executives' machines, which were basically there to read emails and view Powerpoints.

It was always amusing seeing executives watching a presentation, and asking "is there something wrong with your computer? It seems so... slow", only to be told that this was perfectly normal, and that people had been screaming about the productivity impact of using garbage equipment for development for years, only for the complaints to fall on deaf ears.

If any of the management team are hands on, while these sorts of things can still happen, they don't stay for decades without being noticed.

What a difference a few generations makes

One of the reasons that IBM became the behemoth it did was because of the actions of the founder, Thomas J. Watson.

When most members of the company leadership left for the day, they'd take the elevator down to the front entrance, and leave the building, never encountering any of the worker bees. In contrast, when Thomas left, he'd go down the staircase, take his tie off, and wander through the shop floor. Inevitably, he'd strike up a conversation with some floor worker at a lathe or somesuch. Often, the worker wouldn't even know who Thomas was, other than he had a suit. And so he'd be honest with him about what was going on, how likely they were to make the deadlines, and the problems that they were encountering.

Later, when hearing the status reports from other execs, Watson took note of what execs were telling him, compared to what the actual workers had told him. He learned which execs were giving accurate pictures of their projects, and which ones were sugar coating things.

The key thing was that Watson wanted to know what his workers thought, not what their directors thought their vice presidents thought their managers thought their group leaders thought the worker thought. He wanted to know what was going on, and so he talked to his workers directly, and honestly.

The idea of treating the CEO like a visiting dignitary, and dictating behaviour before his or her arrival, is the complete opposite of that mindset. The CEO is not a customer, he/she is not someone that you are trying to impress, the CEO is someone who should be visiting to become informed about the state of the company.

I'm not sure which is worse: that the company isn't even hiding the fact that it is trying to impress the CEO, or that the CEO takes it as a given.

First day on the job training

As a contractor, I've seen more than my fair share of corporate screwups, given that I was usually being employed to fix them.

The two "Who, Me" type examples I can think of were eerily similar. Both happened to a new employee during his first week (in one case, it was his first *day*). They were different industry (avionics and banking), but the common theme was that new, untrained employees should never be given the ability to do that much damage in the first place.

In the first case, the avionics employee, on day one, was given a computer system and told to run a script that would retrieve the source code, do a build, and generate a hex file, which he was to write to floppy, then trot over to the EPROM programmer, and burn an EPROM with said hex file. He unfortunately inverted a few parameters on the (admittedly both complex and counter-intuitive) script, and instead of generating a hex file from source, managed to read an empty hex file and write it into the source tree, overwriting about three months of unsaved work.

This cost the company in the low six figures, just in terms of the lost work, not even factoring in things like contract delays that were incurred and the like. The new hire, unsurprisingly, fully expected to be sacked, but his director (his boss' boss) said "Why would I fire you? I just spent $200K training you!". Of course, said director had choice words as to why an untrained, unsupervised new hire was put in a position where he *could* do so much damage. Questions as to why the script process was undocumented, completely counter-intuitive, and capable of overwriting developer source were raised, as was the issue of why three months of developer work was not saved and archived properly in the first place.

The second case was simpler, and somewhat less costly. A new employee doing tech support for a trading floor was given a spot on the trading floor. For his computer, he basically got a box of scraps, and was told to build himself a working computer. The trading floor was token ring, specifically 4Mb token ring. In the box of computer parts there was more than one token ring card, so the new hire picked the fastest (I believe it was 16Mb, the top end around 1992). He built his computer, got into DOS, all was good, and then he plugged his token ring cable into the wall socket. Within about 100ms, the entire trading floor went offline. Traders do not like going offline. The quoted number was something like $70,000 for every minute that the network is offline, in terms of lost opportunity costs, but of course, that's all projection, not calculable fact. What was calculable was that it took about 15 minutes after the new hire disconnected his PC before the ring was restored.

In both cases, the new hire may well have screwed up, but in both cases, the fault was with their management for giving untrained, unmonitored employees capabilities well above their skill levels.

Re: Twice screwed

I cannot agree more about backups.

Though most of my friends are engineering types, many are married to/derived from/have spawned mundanes. It happens in the best of families.

I cannot count the number of quintuple levels of backups that have been casually tossed aside, reformatted, lost, or otherwise rendered inoperative, only to have absolute delirium descend when the inevitable occurred and the drive crashed.

I've had users near-hysterical because a laptop drive died (bad MBR and a heating issue to boot, very nasty), taking over a decade of irreplaceable data with it. Through a miracle of boot sector fiddling, and spraying freeze-mist at timed intervals to keep the drive at just the right temperature so it didn't overheat and shut down, we managed to get it going, just barely.

Of course, our attempts to immediately scrape the essential data off to a backup were stymied as the user (who outranked us in the hierarchy by several levels) waved us aside, because she needed to work on the drive RIGHT NOW.

Fortunately, my co-worker, more savvy than I was, had prepared for this. He had a printed-out form ready for her to sign. It stated that she was fully aware the drive was dying, that using it prevented data from being backed up, and that her insistence on using it meant all data could be lost irretrievably.

She signed it, shooed us aside, and went to work on the "fixed" drive. Two hours later, the phone call came in, and no amount of freeze mist, holy water, or the like could put humpty dumpty back together again.

Fortunately, the business critical data had been scraped off (we'd insisted on that), the only things that had been lost were all of the personal things that were on the laptop. Of course, she tried to then escalate the issue because the "useless" techs had not saved her critical work. This apparently included her daughter's thesis, which raised the question of why her work laptop was being used by her daughter in the first place. My co-worker presented the form she had signed, taking full responsibility, and we were lucky enough to work for sane management, and the matter was dropped.

But to this day, I'm certain that that user blames her data loss on us, "bad luck", and learned absolutely nothing from it.

Re: hate that too

A more perfect example of an utterly meaningless ad it would be hard to find.

I'll see your shop window ad, and raise you Rogers Home Internet.

I wish I had the flyer in front of me, but Rogers recently sent out a Home Internet Package for $24.95 advertisement. The flyer lists the speed and capacity in large, red letters, along with the price of $24.95, with a nearly microscopic asterisk next to the price. The asterisk in turn was detailed at the bottom of the flyer in a similarly tiny font, with the added benefit of being a light grey colour (on shiny white postcard stock background).

The most amusing part was that the explanatory text simply included the words "plus other charges", without explanation.

In other words, the $24.95 selling price was actually $24.95 plus some other, undefined number. Unlike your storefront example, where "up to" and "exclusions apply" at least permit the possibility of some merchandise meeting the grandiose promise, the Rogers advert actually is 100% meaningless. Not only can the customer not get the package for $24.95, he/she cannot even calculate what the actual price might be, as there are no details as to what the "other charges" even are, never mind what they cost.

Still hanging on to my Pebble Time after all this, er, time

I loves me my Pebble. Not because it's a "smartwatch", but because it's a smart "watch".

That is to say, it does everything I expect from a watch - it tells time, it has a stopwatch and countdown timer, and I can set alarms.

That's about it, really. Everything else is gravy.

The "smart" is that it can also get alarms from my phone (calendar, phone, SMS). It's got step counting, and sleep monitoring, which are nice, too.

The thing is, it's a watch first and foremost. It runs for a week on a charge, and it's a 100% watch replacement. So many of the smartwatches I see are more like phones in a watch form factor than a watch.

When my Pebble eventually dies (as any battery-powered piece of tech eventually must), I shall mourn it. But somehow, I shall manage to persevere.

Been there, done that, pulled the stitches

Like the author, I've been doing this for too many years.

Every few years, a new management fad comes along that will "solve everything!", just you wait.

In the mid 1980s, it was CASE tools. When CASE tools didn't pan out, they rushed to Expert Systems. That was followed by Artificial Intelligence, which in turn was superseded by Object Orientation. After that, it was "Software ICs", with Ada being the language to rule them all. Then it was Adaptive Interfaces. Finally, the Internet popped up.

The best way you can tell you're dealing with a fad is that (a) management is busy cooing over how wonderful things are going to be, (b) little if anything developed with the new tech actually works at the moment, and most importantly, (c) the phrase "if you're not doing X, you're going to be out of business in a year" is heard everywhere.

There are many shops that despite being Ada-free, CASE-free, Expert Systems-free, OO-free, Adaptive Interface-free, and Software IC-free, have managed to keep humming along for 30+ years.

Mind you, some of these fads have some very good ideas behind them, and often there's some solid tech, as well. But there's a vast chasm between "hey, object orientation gives us more code re-use, and allows us to generate higher quality code" and "object orientation will change the way you do business". Using the tech doesn't mean buying into the often absurd claims associated with it.

Headaches for IT

More positively, when done right, Windows 10 should mean fewer headaches for the IT department.

Yes, it will do so by offloading those headaches from the IT department to the end users.

Generally speaking, IT headaches come when users have the freedom and ability to install software, causing conflicts.

Speaking as a software developer, I love being able to install software. Speaking as an IT person, I hate it when users install crap. The happy medium is users requesting, and IT reviewing, and then approving, software that can be supported.

Unfortunately, usually you either have one extreme or the other - total chaos, as users do whatever they want as IT watches helplessly as the network thrashes itself to death and virus infections are rampant, or total lockdown, where users are so restricted by IT that they can't get anything done, and people have to use their phones to do a Google search because the IT restrictions are so oppressive.

The author seems to think that the latter situation is desirable. I doubt all that many users will agree.

Re: Aaaand that's why I hate MAD.

Back around the time this happened, I worked alongside $AGENCY1 when they discovered some, ahem, "irregularities". The specifics were of the tell-you-have-to-kill-you variety, but they actually were not that important, other than to say it was basically "sooper seekrit $KNOWLEDGE may have been obtained by $ENEMY".

In the investigation by investigation team #1, it was decided that although a theoretical leak of $KNOWLEDGE was bad, even if it had happened, it would not require "treatment" (I loved those cold war euphemisms), because on its own, $KNOWLEDGE was not "actionable" without $MATERIAL, and $MATERIAL was impossible for $ENEMY to obtain.

A few weeks later, when working alongside $AGENCY2 on an unrelated matter, it was mentioned that things were being clamped down because they'd misplaced some $MATERIAL a while back. There had been an investigation by investigation team #2, who had decided that although the loss of the minimal amount of $MATERIAL was bad, even if it had somehow been obtained by $ENEMY, it would not require "treatment" because on its own, $MATERIAL was not "actionable" without $KNOWLEDGE, and $KNOWLEDGE was impossible for $ENEMY to obtain.

Basically, $AGENCY1 and $AGENCY2 were both relying on the other one being fail safe, and of course, they didn't talk.

So, I and a few others did the only logical thing: we held a party for members of both investigation teams.

Suffice to say, watching the two groups telling each other "but I thought YOU were the ones preventing the apocalypse" was a little more exciting than hoped. This was particularly true as the group that was relying on $KNOWLEDGE being unobtainable was discovering $KNOWLEDGE was being openly discussed at the party.

Good times, good times.

In the end, that collaboration resulted in things being found, so all was well, but the entire experience was not reassuring.

One bright note was that we managed to get reimbursed for the cost of the party. This was the only party I'm aware of that has an expense report justification of "WWIII Prevention".

Re: Biometrics

Bit of paper with a full-page photo, folded to the shape of the face that's on it?

Years ago, in Japan, where they sell everything in vending machines, they started selling adult products (porn, sex toys, whatever). Of course, the government required that there be safeguards to prevent underage buyers from obtaining these products.

Of course, underage buyers got them anyway, and when they pulled the vending machines and looked at the photos of the buyers of all these products, they noticed a staggering number of pop stars, actors, and actresses. Although the facial recognition software was very good at differentiating between a face that was 12 years old and one that was 19, it wasn't good at differentiating between a 32 year old actress and a photograph of a 32 year old actress.

Everything old is new again

Back around 1993 or so, I was given a Psion 3a, which was useful more in theory than in practice. Especially given the $600 price tag. I paid the $100 for the PC bridge, and was glad to have it, but it was definitely a work in progress.

In 1997 or whenever, the Revo series came out, and I got that. It included the PC bridge in the cradle, and was crazy useful. Also, at about $200, it was a lot cheaper. It wasn't until the Pilot (later Palm Pilot, later Palm) came out that I stopped using it, and even so, there were things in the Psion software that were better than Palm (like the spreadsheet).

Two weeks ago, it was announced there would be new Palm devices in 2018, but that they'd likely be running Android. So naturally, a new Psion has to likewise come out, and also run... Android.

Like the new Nokia phones, those aren't comebacks of the old tech companies, or the old innovations that came with them. They're pretty much just companies in a busy market buying a beloved name to slap on their products to try to buy brand loyalty out of nostalgia.

This one, at least, appears to be a bit more of an homage than a simple name grab. I'm not sure that I'd want or need one in this day and age, but bringing back the now defunct form factor at least differentiates it from other products.

Now, with the iPod Nano going defunct, all we need is some 2000-era stand alone MP3 player for joggers who aren't interested in taking a 6" phone slab with them when they exercise...

Activity trackers don't have the stigma that smartwatches do

I see this a lot in the Pebble forums. There's one group of people who want the thing on their wrist to play videos, monitor their heart, track their movements with a GPS, and buy things with NFC. Then there's another group that just wants a watch, dammit.

The intersection set between these two is minimal. This is why the Pebble expats are unhappy with the Fitbit Ionic, because it's overpriced and overkill for a simple watch; and the fitness types aren't impressed that all they really got out of the Pebble IP that Fitbit bought was better battery life and the possibility of an app store.

The problem is that vendors need to choose which group they want to appeal to, and most are trying to satisfy both. Compromises have to be made, and each compromise will please only one group.

As the saying attributed to Abe Lincoln goes, he didn't know what the secret of success was, but the key to failure is trying to please all of the people all of the time.

New! Improved! Still works!

This reminds me of the kerfuffle back in the late 1980s, when half-height floppy drives first came out. Suddenly, portables (think laptops that weighed over a stone) could have two floppy drives! At the same time!

Still, some fretted. Would there be any problems switching from a full-height drive to a half-height? This was not helped at all when some companies brazenly started advertising that their software worked on systems with the half-height drives. This, of course, got people worried that their competitor's software wouldn't, and it took a while before people realized that a floppy drive was a floppy drive, regardless of how high it was.

So, MS is changing the colour? Well, I've not started a CMD console in years; everything is done within JPSoft's excellent TCC (and freeware TCC/LE) replacements. If an instance of cmd.exe must be run, the tabbed cmder console is much better...