The web page listed my name and email address, so the riskiness of clicking it seemed low, but ALL KLAXONS were going off in my head about this being phishing. I had also received another email threatening to suspend my domain if I did not verify it.

The original sender is tucows.net? There’s no way a real company would be using such a site to send these emails. After all, that’s some lonely script kiddie in their mom’s basement BS. This had to be phishing.

One last check: I did a dig on verify.domain.com and compared that to the www record for the company. Two very different IP spaces, but crucially the nameservers had “dyn” in the name, which red-flagged it as one of those dynamic DNS services, so it could be anything, anywhere. Definitely not legitimate.
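That sniff test can be sketched in a few lines of Python. This is a toy illustration, not what I actually ran: it assumes you have already pulled the NS records separately (e.g. with `dig NS verify.domain.com +short`), and the hostnames below are invented.

```python
# Toy heuristic: flag nameservers whose hostnames suggest a dynamic DNS
# service. The NS records are assumed to have been fetched separately
# (e.g. with `dig NS verify.domain.com +short`); hostnames are made up.

def suspicious_nameservers(nameservers):
    """Return the nameservers whose hostnames contain a dynamic-DNS tell."""
    tells = ("dyn",)  # the token that set off my red flag
    return [ns for ns in nameservers
            if any(tell in ns.lower() for tell in tells)]

ns_records = ["ns1.dynexample.net.", "ns2.dynexample.net."]
print(suspicious_nameservers(ns_records))  # both match the "dyn" tell
```

Of course, as the rest of this story shows, a heuristic like this only tells you where to look harder, not the actual answer.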

So I went to the registrar’s site to report the phishing and looked at my domain’s record to see if anything really had changed. It had not, but I noticed a phone number I had not used since 2003, so I updated the record. Then a notice appeared that they needed me to verify the information. I went looking for it and saw… another copy of the phishing email, timestamped when I updated the record. At this point, I suspected maybe I was completely wrong. Since the risk seemed low, I clicked the link and the verify button, then went back to the registrar’s site to see if the warning about needing to verify my information had cleared. It had. Dammit!

The “just” part is the hard part. Willpower depletion may not be real (another), but it certainly seems real for me. Recently, when stressed, I made lots and lots of bad decisions earlier in the day than normal.

Chaining: Add it as a step to an existing routine. For me, after work, I go home, change, and go to the gym. The other day, I picked up something from the store, then other things, to the point that the gym was near closing by the time I realized I had not gone. This also makes it a struggle to go on weekends, of which I maybe manage one of the two, but the weather has been so nice I sometimes go for an hour walk instead.

Precommitment: Making the decision in the moment could go either way; intentions are better well before the moment. I lay out my gym clothes on the bed before going to work, so I would have to deliberately move them in order not to follow through. (I really do not like the partner example in the post, because a friend has flakey partners who often fail to show.)

Reward yourself: For me, it is some bread in the post-workout meal. If I go, then I allow myself to eat some bread. If I fail to go, then I don’t allow myself any.

Reduced barriers to training: This is just looking at the times I do not go, figuring out why, and changing the environment so that I am more likely to go. It also helps to look at occasions when I almost did not go.

The story I tell about how I ended up working in information technology is about having a computer as a child, breaking it, and most importantly knowing that I had to fix it before my parents found out that I had. The typical takeaway is that I was intelligent, talented, etc. But really, that reveals the wrong assumption.

The more correct takeaway is that by this point in computer history, people designed computers to be fixed. The above quote suggests radios were initially custom-built, which made them expensive to fix. To accomplish mass production, modular components made them easier to assemble, with the side benefit of making failed parts easy to swap. Computers followed the same path, not only on the hardware side but also in software. Modularity in software is how we can patch, install new software, change settings, and so on to fix issues.

Even today, I see people look appalled that smartphones can be successfully sold without an easy way for the owner to replace the battery or a microSD slot to add storage. We like to be able to fix our stuff. Maybe it is our Do-It-Yourself cultural biases at play.

Making things fixable lowers the bar to tinkering with them. Tinkerability makes something more accessible for learning where, when, how, and why it behaves the way it does. Those experiences in turn take a self-taught user to power user and eventually to computer administrator, who really is just a power user given the keys to the off-limits parts.

Making good predictions isn’t just about your accuracy; it’s also about your calibration.

Accuracy = how often the prediction turns out correct
Calibration = how well the stated confidence matches that rate of being correct

All too often when we see predictions, no one asks about the calibration. Nor do we go back and check the accuracy. I know I am weird in the sense that when I make predictions for work, such as “this file system is going to run out of space in x days,” I often attach my level of confidence to that prediction. (It might happen sooner, but it could also happen much later.)
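Going back and checking calibration can be sketched like this: group past predictions by the confidence you stated, then compare that stated confidence to the fraction that actually came true. The history data below is invented for illustration.

```python
# Sketch of an after-the-fact calibration check. For each stated
# confidence level, compute the fraction of those predictions that
# actually came true. Well calibrated means the two roughly match.
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated_confidence, came_true) pairs.
    Returns {stated_confidence: observed_accuracy}."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        buckets[confidence].append(came_true)
    return {conf: sum(outcomes) / len(outcomes)
            for conf, outcomes in sorted(buckets.items())}

# Invented history: three "80% confident" calls, only one came true.
history = [(0.8, True), (0.8, False), (0.8, False), (0.5, True), (0.5, False)]
print(calibration_table(history))  # overconfident where value < key
```

With real data you would want far more than a handful of predictions per bucket, but even a crude table like this exposes the gap between stated confidence and reality.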

The greater our expertise, the worse overconfidence tends to bite us. Knowing a lot hurts us: when our confidence is too high, we minimize the space of allowable error in our thinking. Even a worrier like myself falls victim often enough. Here is an example overconfidence test.

The paper, by psychologists Joyce Ehrlinger, Ainsley Mitchum and Carol Dweck, reports three studies in which participants were asked to estimate their own performance on a task, either a multiple choice test with antonym problems or a multiple choice general knowledge quiz. The participants were asked to estimate their percentile relative to other students completing the task, from zero percent (worse than all other students) to 100 percent (better than all other students). If participants were perfectly calibrated, then the average percentile estimate should have been 50 percent. But that’s not what they found. In Study 1, for instance, the average was 66 percent. Like the children of Lake Wobegon, participants (on average) believed themselves to be better than average.

Thinking back on many of my work predictions? They were probably way too high. Something that was essentially a super wild-ass guess (technical term: SWAG) may have been reported as 70-80% confident. It was informed by metrics and a trend, but there was no reason to think that trend would continue.

I use the stats from my resolutions to decide how many books to do in any given year, and I picked 25,000 for this year specifically because of the low page count. The app is the only way I remember to add what I am reading and have finished. A couple of books have come my way through the Giveaways. And I have tons of shelves.

A friend’s bad taste in books has made me skip over a few. While I am connected to several past dates, I doubt “I saw on your Goodreads profile that you read <some great book> too” would fly very well. Though OKCupid does have a section on favorite books, the Goodreads compare tool would probably work better.

And this is cute…

Add the book from your Currently Reading shelf to your email signature.

THE book? I only get down that close in the few days prior to New Year’s Eve, as I try to go into the next year new.

Automation is here. For most of my time in college, I thought Industrial-Organizational Psychology was my career path. It was not until my last semester, when I started working as a student for IT, that it all changed. Little did I know that people-efficiency was dying. Automation, aka the efficiency of technology (computers and robots), is where industry is making its money. Nor did I know that the anger bolstering both the Republican and Democratic presidential front-runners is due to automation. People are frustrated with the economy and how it affects them.

Edward Elser explains why:

Wages no longer follow productivity.

A whole lot of people are unhappy. Some people blame the soulless corporation. Others blame workers who want $15 an hour for a marginal job. But, here is the deal. Here is where I think the root of the discontent is: the system does not need unskilled labor anymore. Industry simply does not need unskilled or marginalized workers.

It used to be that when productivity went up, so did wages. But somewhere in the 1970s a disconnect happened. The value of labor stopped following productivity. And in my opinion this trend is not going to change; it is going to accelerate.

4. After bleakly assessing public opinion, The Myth of the Rational Voter argues that democracies normally deliver substantially better policies than the public wants. The political system tends to quench the public’s anti-market and anti-foreign urges while substantially watering down the policy poison. In 2016, one of the main dilution mechanisms has badly failed: Using social pressure to check and exclude hard-line demagogues.

One thing I have seen quite a bit of in the past few months is people asking those who disagree with them to unfriend them on Facebook. That is not directed at just one specific campaign but at supporters of Clinton, Sanders, Rubio, Cruz, and Trump. Oddly enough, I don’t think I’ve seen it directed at Kasich. (No, I’m not going to fix that by taking that stance.)