Author: Lauren

With so much criticism lately being directed at the more “unsavory” content on YouTube that I’ve discussed previously, it might be easy to lose track of why I’m still one of YouTube’s biggest fans.

Anyone could be forgiven for forgetting that despite highly offensive or even dangerous videos on YouTube that can attract millions of views and understandable public scrutiny, there are many other types of YT videos that attract much less attention but collectively do an incalculably large amount of good.

I’m not referring here to “formal” education videos — though these are also present in tremendous numbers and are usually very welcome indeed. Nor am I just now discussing product installation and similar videos often posted by commercial firms — though these are also often genuinely useful.

Rather, today I’d like to highlight the wonders of “informal” YT videos that walk viewers through the “how-to” or other explanatory steps regarding pretty much any possible topic involving computers, electronics, plumbing, automotive, homemaking, hobbies, sports — seemingly almost everything under the sun.

These videos are typically created by a cast and crew of one individual, often without any formal on-screen titles, background music, or other “fancy” production values.

It’s not uncommon to never see the faces of these videos’ creators. Often you’ll just see their hands at a table or workbench — and hear their informal voice narration — as they proceed through the learning steps of whatever topic they wish to share.

These videos tend with remarkable frequency to begin with the creator saying “Hi guys!” or “Hey guys!” — and often when you find them they’ll only have accumulated a few thousand views or even fewer.

I’ve been helped by videos like these innumerable times over the years, likely saving me thousands of dollars and vast numbers of wasted hours — permitting me to accomplish by myself projects that otherwise would have been expensive to have done by others, and helping me to avoid costly repair mistakes as well.

To my mind, these kinds of “how-to” creators and their videos aren’t just among the best parts of YouTube, but they’re also shining stars that represent much of what, many years ago, we hoped the Internet would grow into.

These videos are the result of individuals simply wanting to share knowledge to help other people. These creators aren’t looking for fame or recognition — typically their videos aren’t even monetized.

These “how-to” video makers are among the very best not only of YouTube and of the Internet — but of humanity in general as well. The urge to help others is among our species’ most admirable traits — something to keep in mind when the toxic wasteland of Internet abuses, racism, politicians, sociopathic presidents — and all the rest — really start to get you down.

As I’ve frequently noted, one of the reasons it can be difficult to convince users to provide their phone numbers for account recovery and/or 2-step, multiple-factor authentication/verification login systems is that many persons fear that the firms involved will abuse those numbers for other purposes.

In the case of Google, I’ve emphasized that their excellent privacy practices and related internal controls (Google’s privacy team is world class) make any such concerns utterly unwarranted.

Such is obviously not the case with Facebook. They’ve now admitted that a “bug” caused mobile numbers provided by users for multiple-factor verification to also be used for spamming those users with unrelated text messages. Even worse, when users replied to those texts their replies frequently ended up being posted on their own Facebook feeds! Ouch.

What’s most revealing here is what this situation suggests about Facebook’s own internal privacy practices. Proper proactive privacy design would have compartmentalized those phone numbers and associated data in a manner that would have prevented a “bug” like this from ever triggering such abuse of those numbers.
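To make the design point concrete, here’s a toy sketch of what “compartmentalized” storage can mean in practice — entirely hypothetical code of my own, not Facebook’s or anyone else’s actual system: a phone number collected for verification is bound to that purpose, so a messaging pathway simply cannot read it out, “bug” or no “bug”.

```python
# Toy sketch of purpose-bound data storage (hypothetical design).
# A value saved for one declared purpose can only be retrieved by
# code declaring that same purpose.
class PurposeBoundStore:
    def __init__(self):
        self._data = {}  # user_id -> (value, allowed_purpose)

    def put(self, user_id, value, purpose):
        self._data[user_id] = (value, purpose)

    def get(self, user_id, purpose):
        value, allowed = self._data[user_id]
        if purpose != allowed:
            raise PermissionError(
                f"access for {purpose!r} denied; stored only for {allowed!r}")
        return value

store = PurposeBoundStore()
store.put("alice", "+1-555-0100", purpose="2fa")   # placeholder number

store.get("alice", purpose="2fa")            # permitted
# store.get("alice", purpose="sms_marketing")  # raises PermissionError
```

In a design like this, the spamming pathway fails loudly at the access check instead of silently reusing the number.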

Facebook’s sloppiness in this regard has now been exposed to the entire world.

And naturally this raises a much more general concern.

What other sorts of systemic privacy design failures are buried in Facebook’s code, waiting for other “bugs” capable of freeing them to harass innocent Facebook users yet again?

These are all more illustrations of why I don’t use Facebook. If you still do, I recommend continuous diligence regarding your privacy on that platform — and lotsa luck — you’re going to need it!

UPDATE (February 16, 2018): The FBI is reporting today that on January 5th of this year, they received a tip from an individual close to the shooter, specifically noting concerns about his guns and a possible school shooting. In sharp contrast to the single unverifiable YouTube comment discussed below that had been reported to the FBI, the very specific information apparently provided in the January tip is precisely the kind of data that should have triggered a full-blown FBI investigation. Since the information from this January tip reportedly was never acted upon, this dramatically increases FBI culpability in this case.

– – –

Before the blood had even dried in the classrooms of the Florida high school that was the venue for yet another mass shooting tragedy, authorities and politicians were out in force trying to assign blame everywhere.

That is, everywhere except for the fact that a youth too young to legally buy a handgun was able to legally buy an AR-15 assault-style weapon that he used to conduct his massacre.

Much of the misplaced blame this time is being lobbed at social media. The shooter, whom we now know had mental health problems but apparently had never been adjudicated as mentally ill, had a fairly rich social media presence, so the talking heads are blaming firms like YouTube and agencies like the FBI for not “connecting the dots” to prevent this attack.

But the reality is that (as far as I can tell at this point) there wasn’t anything particularly remarkable about his social media history in today’s Internet environment.

There was — sad to say — nothing notable to differentiate his online activities from vast numbers of other profiles, posts, and comments that feature guns, knives, and provocatively “violent” types of statements. This is the state of the Net today — flooded with such content. When I block trolls on Google+, I usually first take a quick survey of their profiles. I’d say that at least 50% of the time they fall into the kinds of categories I’ve mentioned above.

We also know that 99+% of these kinds of users are not actually going to commit violent acts against people or property.

20/20 hindsight is great, but by definition it doesn’t have any predictive value in situations like this. Law enforcement couldn’t possibly have the resources to investigate every such posting.

In the case of this shooter, the FBI actually became involved after a YouTube user expressed concern about a comment left by someone (using the name of the shooter) saying “I’m going to be a professional school shooter.”

That’s not even an explicit threat. There’s no specified time or place. It’s very nasty, but not illegal to say. Social media is replete with far more explicit and scary statements that would be much more difficult to categorize as likely sarcasm or darkly joking around.

The FBI reportedly did a routine records search on that name (of course, anyone can post pretty much anything under any name), and found nothing relevant. To have expended more resources based only on that single comment didn’t make sense. Nor is there apparently any reason to believe that if they’d located that individual, then gone out and immediately interviewed him, that the course of later events would have been significantly changed.

We’re also hearing the refrain that authorities should have the right to haul in anyone reported to have mental stability issues of any kind, even if they’ve never been treated for mental illness or been arrested for any crime.

Well golly, these days that would probably include about four-fifths of the population, if not more. Pretty much everyone is nuts these days in our toxic social and political environments, one way or another.

The world is full of loonies, but these kinds of attacks only happen routinely here in the U.S. — and we all know in our hearts that the trivial availability of powerful firearms is the single relevant differentiating factor that separates us from the rest of the civilized world in this respect.

Google has announced that it is bringing its “AMP” concept (an acronym for “Accelerated Mobile Pages”) to Gmail, and is encouraging other email providers to follow suit.

AMP in the mobile space has been highly controversial since the word go, mainly due to the increased power and leverage that it gives Google over the display of websites and ads.

The incorporation of AMP concepts into email, to provide what Google is calling “a more interactive and engaging” email experience, is nothing short of awful. It seriously sucks. It sucks so much that it takes your breath away.

In this post I’m not interested in how or by how much AMPed email would push additional market power to Google. That’s not my area of expertise and I’ll largely defer to others’ analyses in these regards.

But I do know email technology. I’ve been coding email systems and using email for a very long time — longer than I really like to think about. I was involved in the creation of various foundational email standards on which all of today’s Internet email systems are based, and I have a pretty good feel for where things have gone wrong with email during ensuing decades.

Introduction of “rich” email formats — in particular HTML email with its pretty fonts, animated icons, and wide array of extraneous adornments — can be reasonably viewed as a key class of “innovations” that led directly to the modern scourge of spam, phishing attacks, and a wide variety of other email-delivered criminal payloads that routinely ruin innumerable innocent lives.

Due to the wide variety of damage that can be done through unscrupulous use of these email formats, many sites actually ban and/or quarantine all inbound HTML email that doesn’t also include “plain text” versions of the messages as well.

In fact, the underlying email specifications require such a plain text version to accompany any HTML version. Unfortunately, this requirement is now frequently ignored, both by crooks who use its absence to try to trick email users into clicking through to their malignant sites, and by “honest” email senders who just don’t give a damn about standards and only care about getting their bloated messages through one way or another.
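For the curious, doing this right is trivial — Python’s standard `email` library, for example, builds the proper multipart/alternative structure (plain text plus HTML) automatically. The addresses below are placeholders:

```python
from email.message import EmailMessage

# Build a message carrying both a plain-text body and an HTML
# alternative, as MIME intends. Receivers that distrust HTML-only
# mail will still find a plain-text part to display.
msg = EmailMessage()
msg["From"] = "sender@example.com"      # placeholder addresses
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"

msg.set_content("Hello,\nThis is the plain-text version.\n")
msg.add_alternative(
    "<html><body><p>Hello, this is the <b>HTML</b> version.</p></body></html>",
    subtype="html",
)

# The library emits multipart/alternative, with the plain-text part
# first (least preferred, per MIME ordering) and the HTML part second.
print(msg.get_content_type())  # multipart/alternative
```

There’s simply no technical excuse for senders to omit the plain-text part.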

This state of affairs has led many site administrators to treat inbound HTML-only email as a near-certain signal of spam. Much legitimate email is thrown into the trash unseen as a result.
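That heuristic is easy to sketch in code — this is an illustrative check of my own, not any particular site’s actual filtering rule:

```python
import email
from email import policy

def is_html_only(raw_message: bytes) -> bool:
    """Return True when a message offers an HTML body but no
    plain-text alternative -- the pattern many administrators
    treat as a likely-spam signal."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    has_plain = msg.get_body(preferencelist=("plain",)) is not None
    has_html = msg.get_body(preferencelist=("html",)) is not None
    return has_html and not has_plain

# A bare HTML-only message trips the check (placeholder addresses):
raw = (b"From: a@example.com\r\nTo: b@example.com\r\n"
       b"Subject: test\r\nContent-Type: text/html\r\n\r\n"
       b"<p>Click here!</p>\r\n")
print(is_html_only(raw))  # True
```

A real filter would of course weigh this alongside many other signals rather than rejecting outright.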

Google now plans to push what amounts to HTML email on steroids, creating a new email “part” that will likely quickly become the darling of the same email marketers — further bloating email, wasting data, and causing both more confusion for users and more opportunities for virulent email crooks.

No doubt Google has considered the negative ramifications of this project, and obviously has decided to plow ahead anyway, especially given the rapidly growing challenges of the traditional website ad-based ecosystem.

I frequently am asked by users how they can actively avoid the tricky garbage that arrives in their email every day. I have never in my life heard anyone say anything like, “Golly, I sure wish that I could receive much more complicated email that would let me do all sorts of stuff from inside the email itself!”

And I’ll wager that you’ve never heard anyone asking for “more interactive and engaging” email. Most people want simple, straightforward email, keeping the more complex operations on individual websites that aren’t “cross-contaminated” into important email messages.

AMP for email is a quintessential “solution in search of a problem” — a system being driven by corporate needs, not by the needs of ordinary users.

Worse yet, if email marketers begin to widely use this system, it will ultimately negatively impact every email user on the Net, with ever more unnecessarily bloated messages clogging up inboxes even if you have no intention of ever touching the “AMPed” portion of those messages.

And I predict that despite what will surely be the best efforts of Google to avoid abuse, the email criminals will find ways to exploit this technology, leading to an ever escalating whack-a-mole war.

Throwing everything except the kitchen sink into HTML email was always a bad idea. But now Google apparently wants to throw in that sink as well. And frankly, this could be the final straw that sinks much of email’s usefulness for us all.

We’re losing the account security war. Despite the increased availability of 2-step verification (2sv) systems — also called 2-factor and multiple-factor verification/authentication — most people don’t use them. As a result, conventional phishing techniques continue to be largely effective at stealing user account credentials, ruining many lives in the process.

As I’ve discussed previously, part of the reason for this low uptake of 2sv relates to the design of the systems themselves — they frankly remain too complicated in terms of “hassle level” for most users to be willing to bother with.

They don’t really understand them, even when many options are provided. They’re afraid they’ll screw up and get locked out of their accounts. They don’t want to hand over their phone numbers. They don’t trust where the verification phone calls are coming from when they see them on Caller ID — sometimes even reporting those calls as spam on public websites! They don’t know how to use 2sv with third-party apps. Often they tried to use 2sv, got confused, and gave up. It goes on and on. We’ve discussed this all before.

And to be sure, many 2sv implementations simply suck. Frequently they’re badly designed, break down easily, are a pain in the ass to use, and sometimes do lock you out.

Even for Google, which has one of the best 2sv systems that I know of (see their 2sv setup site at: https://www.google.com/landing/2step), user acceptance of 2sv is dismal — Google reports that fewer than 1 in 10 Gmail users have 2sv enabled.

And so the phishing continues. Recently there have been reports of new Russian hacking attacks against Defense Department users’ Gmail accounts (mostly their personal accounts, but that’s bad enough given the leverage that personal info found in such accounts might provide to adversaries).

In corporate environments it’s possible to require use of 2sv. But outside of those environments, this is a very tricky proposition. I’ve noted the theoretical desirability of requiring 2sv for everyone — but I also acknowledge that as a practical matter, given current systems and sensibilities, this is almost certainly a non-starter for now.

Too many users would object, and unlike some government entities (e.g. the Social Security Administration and IRS) that now require 2sv to access their sites and always offer alternative offline mechanisms (e.g., phone, conventional mail) for dealing with them, any major Web firm that tried to require 2sv would be likely to find itself at a competitive disadvantage in short order.

But there’s an even more fundamental problem. Most users simply don’t believe that they’re ever going to be hacked. It always “happens to somebody else” — not to me! Using 2sv just feels like too much hassle for most people under such conditions, though after they or someone close to them have been hacked, they frequently change their tune on this quite quickly — but by then the damage is done.

It’s time to face the facts. Trying to “scare” users into adopting 2sv has been an utter failure.

Maybe we need to consider another approach — the carrot rather than the stick.

What can we do to make 2sv usage desirable, cool, even fun?

In other words, if we can’t successfully convince users to enable 2sv based on their own security self-interests, even in the face of nightmarish hacking stories, perhaps we can “bribe” them into the pantheon of 2sv.

There are precedents for this kind of approach.

For example, Google in the past has offered a bonus of additional free disk space allocations for users who completed specified security checkups.

Could we convince users to enable 2sv (and keep it enabled for at least reasonable periods of time) through similar incentives?

How about a buck or two of Play Store or other app store credits?

Can we make this more of a game, a kind of contest? Why not provide users with incentives not only to enable 2sv themselves, but to help convince other users to do so?

Obviously the devil is in the details, and any such incentive programs, rewards, or account bonuses would need to be carefully designed to avoid abuse.

But I increasingly believe that we need to explore new account security paradigms, especially when it comes to convincing users to enable 2sv.

The status quo is utterly unacceptable. If “bribing” users to enable better security on their accounts could make a positive difference, then let’s bring on the bribes!