Was it a hack? The answer to this question is tricky. Yes, it was a hack, but not in the way most people understand them. It was a DNS DDoS attack, and if you don’t know what that means then I don’t blame you. It’s two different acronyms mashed together. Two different acronyms of technical jargon that – even if you knew what they meant – wouldn’t really give you an understanding of what this problem was or what caused it.

CNN’s expert did what he could to answer the question, but the segment was barely a minute long and that’s not enough time to really understand this problem. Sorry. Sometimes if you ask complicated questions you get complicated answers that don’t fit inside of handy soundbites. But if you want to learn what this attack was and who got hacked, then here’s a simple explanation in layperson’s terms.

What is a DNS DDoS?

Just like every active telephone has a phone number associated with it, every computer system on the internet has a number. These numbers are how computers identify each other. This number is called an IP address.

Way back in the early days of the internet – and I mean REALLY early, like before it was even called the internet – that was all we had. If you wanted to connect with a particular computer, you had to know what its IP address was. And that’s fine. If there are only a dozen computers connected you can just memorize them. But once you’ve got dozens or hundreds of servers, then you need something better.

So we invented the domain name. Instead of memorizing this arbitrary number, you just type Facebook.com. Or Reddit.com. Or shamusyoung.com. Whatever.

Then I hear you say, “Hang on a minute Shamus. There are literally billions of sites on the internet. My little computer can’t possibly know the IP address for every one of them.”

Right you are, it can’t. For that we need a special kind of server called a domain name server, or DNS. This is a computer that helps your computer get those all-important numbers of the sites you’re trying to visit. So you type reddit.com, and your computer asks the DNS for the IP address of reddit.com. Then the DNS replies. Then your computer can contact Reddit. It happens in the blink of an eye, so most people aren’t even aware this is going on.

If you’re old enough to remember the days when you’d use your rotary telephone to call for directory assistance, and then you’d give the operator a name and she’d give you the phone number of the person you were looking for, then this is the same idea. If you’re too young to remember it in those terms, then think of it like I dunno, space magic or something.

So what happens is you type facebook.com into your web browser, and then your computer sends a message to your DNS and asks for the number for Facebook. The DNS replies with the proper IP address. Then your computer contacts the Facebook server and you can check on all those friend requests and birthday announcements and cat pictures you’ve got waiting for you.
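In code terms, the job a DNS does is nothing more than a lookup in a giant table from names to numbers. Here’s a toy sketch in Python – the table and the addresses in it are invented for illustration, and a real resolver asks a server over the network instead of checking a local dictionary:

```python
# A toy "phone book" standing in for a DNS server's records.
# These addresses are invented for illustration only.
DNS_RECORDS = {
    "facebook.com": "157.240.0.1",
    "reddit.com": "151.101.0.1",
}

def resolve(domain):
    """Answer a DNS-style question: what number goes with this name?"""
    ip = DNS_RECORDS.get(domain)
    if ip is None:
        # Real DNS calls this answer NXDOMAIN: "no such domain".
        raise LookupError("no record for " + domain)
    return ip

# Your browser does this behind the scenes every time you type a name.
address = resolve("reddit.com")
```

Once `resolve` hands back the number, the browser contacts that address directly; the DNS is out of the picture until the next lookup.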

So now you’re thinking, ah! So the DNS got hacked, I get it.

Except no. The hackers never took control of the DNS. To understand what they did, we need to understand the other half of this attack…

DDoS

This was a Distributed Denial of Service attack. DDoS for short. How it works is this:

It starts with an ordinary user. This user clicks on something they really shouldn’t. A website offers to give them free money, or help them meet sexy ladies, or to clean all the “viruses” off their computer, and all they have to do is download a little program. Which is, of course, a virus. (Actually it would be better called malware in this context, but we’re trying to keep things simple, so let’s just stick with virus.)

Maybe you’ve had something like this on your computer before. Maybe it buried you in ads, slowed down your computer, deleted your files, or tried to steal your credit card information. But the kind of virus we’re talking about today is a little different. Once installed, it doesn’t seem to do anything. You’ll probably have no idea you’ve messed up. Your computer works exactly like before. Assuming the author of the virus did their job, there won’t be any suspicious windows. No slowdowns. No scary warnings.

Instead, the hacked computer is now quietly, in the background, communicating with a server run by a hacker. The computer will occasionally ask the server, “Hey boss, you want me to do anything?” and the server will usually reply, “Nope. You’re fine.”

But sometimes the server will reply with an order like, “I want you to go and overwhelm Facebook.com”. And so the computer will begin to pelt Facebook with meaningless requests. It doesn’t even care about the answer. The only goal is to keep Facebook busy so it can’t serve anyone else. It’s like calling someone over and over again and then hanging up, just so that if anyone else tries to call they just get a busy signal.

Next I hear you say, “But Shamus. What can a single personal computer do to a mighty website like Facebook?”

And you’re right. One single hacked computer is harmless. But what if there were thousands of them, all under the command of a single hacker? If the hacker tells all of the computers to flood one website at the same time, then they might, through their combined efforts, be enough to overwhelm it.

This network of hacked computers is called a botnet, and each hacked computer in it is called a bot.

So to sum up: A hacker tricks thousands of people into downloading malicious software, which turns their computers into mindless slaves that combine to form a botnet, and then the hacker orders them all to overwhelm a single server to make the server unable to operate normally.
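To make those mechanics concrete, here’s a toy simulation of the check-in loop described above. Everything here is invented – the bots are plain Python objects and the “requests” are just a counter – but the shape of the attack is the same:

```python
class CommandServer:
    """Toy stand-in for a botnet's command-and-control server."""
    def __init__(self):
        self.order = None  # e.g. "facebook.com" once the hacker gives the word

    def check_in(self):
        # Bots poll this; most of the time the answer is "nothing to do".
        return self.order


class Bot:
    """One compromised machine. Harmless alone, dangerous in bulk."""
    def __init__(self, server):
        self.server = server

    def tick(self, traffic):
        target = self.server.check_in()
        if target is not None:
            # One meaningless request; the bot doesn't care about the answer.
            traffic[target] = traffic.get(target, 0) + 1


# Thousands of infected machines, all answering to one server.
server = CommandServer()
bots = [Bot(server) for _ in range(10_000)]
traffic = {}

server.order = "facebook.com"  # the hacker issues the order
for bot in bots:
    bot.tick(traffic)
# traffic["facebook.com"] is now 10,000 simultaneous requests.
```

While `server.order` is `None`, the bots do nothing visible at all – which is exactly why their owners never notice them.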

But!

Some sites are too big to be attacked directly in this way. Things like Facebook and Google and Reddit have massive server infrastructure that can shrug off the typical botnet and keep going. Which brings us back to the kind of attack we saw last week: A DNS DDoS attack.

Instead of attacking a giant like Amazon.com or Google, the hacker can have the bots attack that DNS we talked about earlier. Those machines aren’t generally equipped to deal with enormous traffic loads because their job, while important, is pretty lightweight.

So if you were one of the millions of people affected by this hack, then YOUR computer was working fine, and FACEBOOK was fine, but your computer could no longer reach the DNS to find out how to reach Facebook.

So to return to the original question:

“Was it a hack?”

You weren’t hacked. The website you were trying to reach wasn’t hacked. The DNS wasn’t hacked, although it was attacked by flooding it with traffic. The people who were hacked were the tens of thousands of clueless users who failed to properly secure their computers. And these people probably have no idea they’re the source of the problem, even though their compromised machines pose a threat to the security and stability of the internet.

So now you know what a DNS DDoS attack is. How do we fight them and how do we protect against them? That’s a video for another time.

True, but he wasn’t using it in that context, he was using it to explain why Net Neutrality was a horrible idea. I seem to recall that the problem at the time was that, as the head of the regulatory committee that would ultimately deliver legislation on Net Neutrality, some felt his level of abstraction was insufficient at re-assuring those interested in Net Neutrality that the people who’d decide the matter knew exactly what they were deciding.

I don’t recall the context of the ‘series of tubes’ remark, so I’ll take your word for it. I will say that we often see similar responses by certain segments of the tech community to lines of questioning at the US Supreme Court, even though those questions too are abstractions, or attempts to get petitioners to clarify or further think out a line of reasoning or argument, and aren’t necessarily indicative of a lack of technical knowledge.

Actually the current spate of DDoSs aren’t primarily comprised of standard PCs, it’s mostly “Internet of Things” (IoT) trash – unsecured devices like DVRs, fridges, etc. needlessly connected to the internet that are trivially easy to compromise. A lot of them in this instance appear to be security cameras and webcams. The increasing prevalence of internet-connected devices means this sort of thing is going to become a very frequent occurrence. It also means you should get used to hearing about stuff like cars being remotely hijacked.

What possible use could be had for an internet enabled fridge? It’s a box you put things in that you want cold… If you’re really ambitious it’s also an ice and water dispenser. Is there some temperature control that absolutely must be accessible from my computer room for some asinine reason?

I believe Samsung has a fridge that has an internal camera, so from your phone, you can look inside your fridge. This is useful if you are, say, at the grocery store and can’t remember if you’re out of yogurt, for example.

I think the security hole part of that thought is completely invisible to most vendors and potential customers.

I mean: “smart” lightbulbs, really?

For some reason lots of marginally-informed people believe that privacy is no longer relevant and all their data is rightful property of some large company, and also we know that properly encrypted data and web traffic is a bad thing, because how else would the police know that you’re not a terrorist?
Somewhat less sarcastically: Companies who make these gadgets have no budget for security stuff, because if they spend time and money on it, someone else will beat them with their less secure but cheaper and faster-developed product, so they don’t. Especially since properly securing your device and your users’ data means you’ll probably be sued by the CIA at some point (see Apple).

Ergo they’re all vulnerable, and the business model behind them is always to collect lots of data for the company who pretended to sell them to you, until that company is sold to some much larger other company. That’s a recipe for disaster, both for the customers whose data will be stolen and whose devices get hacked, and for the victims of whatever attack the hacked devices are then used for. We’re just seeing the beginning of this. Researchers who inspected the malware used for this particular botnet agreed that it was not made by professionals. So just wait and see what professionals can do once we have a hundred or a thousand times as many connected “things” around…

I believe it was Bill Gates back in his MS days who first prominently proposed and publicly pushed for a fridge that could automatically order any food items it was low on to be replenished as part of a ‘smart house’ design.

You could also use that to remotely check from the shop, via your phone, what you still have in there.

(Both would require a barcode scanner to be part of the fridge, of course).

Indeed. It’s a feature I wouldn’t want even if it worked by magical pixie dust with no added power consumption or security risk. It’s an example of a machine trying to do what you want instead of what you tell it to do. Why is this bad? You can get better at commanding a given machine properly so that the result is more likely to correlate strongly with desire. The machine is unlikely to get better at this over time – assuming it doesn’t get WORSE.

I’d like a fridge that kept an inventory and linked it with the tools I use to purchase food. Given the capability to do that, a host of other things becomes possible and features that other people want become easy.

Tablets have kind of superseded a lot of this “smart” stuff. Why would you spend hundreds of dollars more on a fridge with a screen on the front when you could attach a $50 tablet to the front that can do a lot more than just order groceries?

The machine won’t get better, that is true. But the ones that follow it will. Also, you may not see worth in it, but others do. There’s a good example of a somewhat recent technology that introduced new security problems for the sake of much convenience: credit cards. The first credit cards were rather easy to exploit, and offered little benefit when compared to paper money. But these days, while paper money is still more secure, the risk of using a credit card has decreased while the benefits have increased substantially.

The same can be applied to practically all technological innovation. The first cars were expensive and slow, offering no benefit over horses. The first planes were barely able to stay in the air. Etc.

You aren’t secure. You never were, and you never will be. That’s the simple fact we have to live with. As for this idea that the machine gets better with time…. Like I mentioned hardly a guarantee. Just look at recent windows operating systems.

Um, the very video that you linked shows otherwise. A machine does it now instead of a person on the phone. Yes, it’s not a HUGE improvement, but it is an improvement. Also, it’s not the only form of security. Copying the data from your credit card without your knowledge is slightly harder now than before.

You aren't secure. You never were, and you never will be.

Of course not. Perfect security doesn’t exist. What does exist is convenience: how much convenience do you get from it versus how inconvenient it is for someone to exploit it. For credit cards that balance has long since tipped in favor of using them.

As for this idea that the machine gets better with time…. Like I mentioned hardly a guarantee. Just look at recent windows operating systems.

The only thing the machine doing it does is make the process faster. There’s no security improvement there. None. As a matter of fact that video was a part of a longer episode on how security is an illusion and that there exist websites where credit card numbers are traded like hacker currency.

Showing an example of technological iteration resulting in tangibly worse product as a demonstration that the machine does not necessarily get better at meeting your desires is faulty? How so? Do we need to get into instances of history wherein software patches critically break things, add layers of inconvenience, or are otherwise counter to the desires of the people using the thing?

I think Amazon’s dash button solution for that is pretty neat. It’s a little dongle you can hang near wherever you keep a particular item, like laundry detergent or whatever, and when you click the button, it orders that item for you.

I can think of many uses for internet-enabled fridge. The fridge has LONG been a place to post stuff and inventory foods. Open the last stick of butter? Shopping list managed on a panel on the door that feeds a corresponding phone app, that might include a barcode reader for further convenience. When it’s not being used for inventory, it could display a calendar with everybody’s things on it, or kids’ digital drawings, or snapshots, all the usual crap that was previously on paper. Why do fridge magnets exist? To hold up the paper. But they don’t stick to fashionable stainless steel anyway. So put posted stuff on a 10-12″ panel built into the door. Give the panel a battery backup and it can send you “Hey, power is out/back on” notifications.

The main problem is not Linux, it’s that it’s general-purpose hardware pretending to be non-general-purpose hardware to the user, but actually it can do all of the things your PC can, if (usually) on a smaller scale.
But people don’t see them as that, and that makes it dangerous. Nobody considers upgrading their friggin’ lightbulb firmware. I mean, most don’t even update their DSL routers, even if there are upgrades available.
(note to self: check for router upgrades later…)
And it’s not as if vendors were encouraging anyone to view “smart devices” as computers. That perspective is only available to the people who make those things, and the people who hack them… that’s the actual problem.

These things would not be any safer if they ran on Windows, MacOS, BSD, OS/2 or whatever else there is… the only way they’d be safer is if a) people (including vendors) cared more about their safety, and b) they weren’t made so much out of general-purpose hardware but used chips with precisely the required abilities. But those would need to be custom made and thus more expensive, despite the reduced functionality, so probably not going to happen.

The infosec writers I’ve been reading these past few weeks are really concerned about IoT security. The biggest issue seems to be neither the seller nor the buyer has much incentive to care about the security of their IoT device. Devices sell (and attract venture capital/investment) based on features, not security, and most buyers don’t care about hidden malware on their device if it otherwise works as advertised.

So write a payload and distribute it to all these IoT devices. Your program should kill any new processes it sees on the machine after it starts. That will prevent new infections. To kill the old ones, you just need to force a reboot, and then run your process-killer.

I’m sure those details are wrong, but what’s to prevent a white hat hacker from neutralizing all of these devices?

The basic idea of hacking in a security patch is theoretically sound, but there are logistical problems that prevent it from broadly fixing the IOT. 1: The black hats are just as capable of patching the device so that it doesn’t listen to anyone but them. 2: There’s no universal vulnerability in the IOT, for every distinct product line, you would have to find its particular exploits, and as the Garrett article points out, there are 30,000 brands of camera alone.

30,000 camera brands representing a handful of actually different models. And not all of them are updatable in the first place. Why not? Because writing the crap to do updating takes time and effort and the vast majority aren’t ever going to get updated ANYWAY because the owners will never bother to check for and do them. So why spend the money on building the infrastructure when you can shave a buck or two off the shelf price of the unit by NOT doing it?

That doesn’t even hinder the exploitation of the device either. Just write the exploit to do its thing until the device is rebooted, then re-exploit after that happens. It’s TRIVIAL if part of the exploit payload reports back to the command-and-control network that it’s been exploited, because then that system (which is significantly smarter) can just notice that the camera hasn’t checked in recently, and add it to the top priority of places to scan to exploit cameras.

Except the source code for the Mirai botnet tool has been released. So whatever they are doing, they must have code for each different type of device they infect. And if these are all some micro Linux on a chip distributed by a single Chinese vendor, it won’t be impossible to target each one. The black hats have already done most of the work.

I said you can’t broadly fix the IOT, not Mirai. You could potentially use the Mirai exploits to force a patch to all Mirai-vulnerable devices, and you’d have “fixed” however many million cameras this used. But the IOT is orders of magnitude larger than some specific Chinese cameras. If you wanted to fix the entire IOT you’d have to find and patch the exploits of five different brands of refrigerators, ten brands of thermostats, twenty brands of printers, a hundred brands of cameras… and that’s assuming a fix is even possible. At least a couple of these devices are going to be so thoroughly unsecure that there’s no way to patch them without breaking the default way that the manufacturer intended for it to communicate, and at that point you’re essentially destroying the IOT to save it.

And as soon as benevolently patching the IOT became common practice, botnet-builders would start patching any device they compromised so that no one else could update it. You could only prevent an IOT botnet if you caught every large exploit before a malicious hacker found any.

How do these devices get compromised, anyway? It’s not like they have an exposed IP address, so the device itself would have to initiate connection to the source of the malicious software. Some sort of MitM attack?

I mean, in my work I’ve seen plenty of commercial systems that are publicly accessible, and with the right understanding of them you could definitely compromise the ones that use insecure credentials, but a) it requires a lot of specialized knowledge to do anything with that, since they’re embedded systems, and b) that’s kinda not-really “internet of things” systems.

For that matter, how many people really use “things”? I always assumed these were just devices for people who buy iPhones the day they come out…

How does that work, when your average user isn’t going to buy a static IP from their ISP?

My gut tells me that most of these devices must register themselves on a server somewhere on the cloud with IP and port-forward info so the end-user’s apps can find it. But unless that server’s been compromised (possible, I guess) or the hacker is getting between the server and the IoT device, how do they find that IP/port forward info? Continuously scan all the ports at all the addresses in the world?

Non-static IP addresses don’t change that much. And usually there’s a central server that has a live list of all the active cameras for owners to use to connect to their cameras from outside for this very reason, which can ALSO be poorly secured. I recall reading about one such server that didn’t limit which cameras could be reached once signed on. It was possible to get access to ANY of the cameras, with any valid credentials. Even without the botnet aspect, that’s incredibly negligent – parents installing cameras in their kids’ bedrooms to check that Dave wasn’t doing anything nasty in his bedroom means that anybody on the planet could be watching Dave doing something nasty in his bedroom.

How does that work, when your average user isn't going to buy a static IP from their ISP?

Whether an IP is static or not is irrelevant. Any device connected to the internet is being port scanned within minutes. Devices will be constantly peppered with bruteforce SSH login attempts and various probes for other vulnerabilities.

It is extremely sobering when looking at access and firewall logs. Most people have no idea how devices connected to the internet are in a state of perpetual sustained assault.

Most users are not network savvy enough to set up port forwarding on a router to do this manually, so automatic steps handle arranging the process of opening a port so the device is internet accessible (using protocols built into many consumer grade routers such as UPnP). Ease of use is a bigger selling point than security, and many companies hard-code credentials into the firmware – in ways that even a knowledgeable administrator physically cannot change (lots of fingers pointed at several Chinese companies for this practice, but others do the same). The Mirai code now released has a lot of those default credentials and the standard ports they appear on hard coded. Researchers who set up honeypots say that it takes less than 10 minutes on average for these devices to be completely compromised after they are turned on.

The only thing to do if you have devices like this in your home network is disable UPnP. You can set up manual port forwarding, but even then the hard coded credentials are still there (just on different ports). Disable UPnP AND port forwarding to the device and… you paid for an internet-enabled device that you have prevented from using the internet. Any features relying on connectivity are unusable.

Insane, absolutely ludicrous, out-there question, but: why couldn’t browsers be tuned to treat IP addresses the way we treat phone numbers, and remember the ones you put in speed dial, or your home page tabs, or bookmarks?

The bigger sites use multiple IP addresses, and IP addresses themselves can change.

Plus there’s the tech support issue. Have it as an option in settings, then someone needs to check that when trying to diagnose the computer. DNS servers only very rarely go down, while the problems from storing IP addresses would be much more common.

The last time one of these things happened, I remember grabbing a list of IP addresses someone posted for staple sites like Wikipedia, Reddit, and Facebook. Testing a couple of them now, they’re useless again.

Some of the load balancing for big sites is done through hardware – though there may be hundreds of servers servicing requests, they’re all answering on a special high-end router which has a single IP address and spreads the requests it receives among the servers more or less evenly. It used to be that some companies would have several servers with different IP addresses, and the DNS would have all those addresses listed under the same domain name, which would then give a random response from that list to each request. (Some even exposed this, giving each an address like ww1.example.com vs ww2.example.com.) This also load balances, but then each individual user will continue to contact the same server.
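The round-robin DNS trick described above can be sketched in a few lines. The addresses are invented, and real DNS servers rotate the record list in their responses rather than running Python, but the effect is the same: each successive client starts with a different server.

```python
from itertools import cycle

# Several servers all answering for the same domain name (invented addresses).
ADDRESSES = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
_rotation = cycle(range(len(ADDRESSES)))

def dns_answer():
    """Return the full address list, rotated so that successive askers get
    a different first entry (the one clients usually connect to)."""
    start = next(_rotation)
    return ADDRESSES[start:] + ADDRESSES[:start]

first_client = dns_answer()
second_client = dns_answer()
# first_client[0] and second_client[0] point at different machines.
```

Note the downside the comment mentions: once a client has an answer, it keeps talking to that one server, so the balancing is per-lookup, not per-request.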

Side note: The DNS system works downward as well. The authoritative name servers don’t usually handle all the details; the IP address that the .com DNS server gives you for http://www.example.com is actually that of example.com’s local DNS server, which handles http://www.example.com vs mail.example.com vs irc.example.com. This means that the authoritative name servers don’t need to be updated very often, and companies can adjust the IP addresses of the other servers they own freely since they control their local DNS.

Also, example.com is the correct example to use, because it has been officially registered as an example and cannot be owned by anyone. Several poor users have been harassed by cheap web classes that pick a random word for an example domain name or email address, and end up directing students towards an unrelated person’s website/email.

Your computer will generally cache all the DNS responses it’s received recently.

If it doesn’t have the answer, it will usually ask your ISP’s DNS server, or a third-party DNS server like Google DNS. Or any other DNS server it’s configured to use. Those servers are ALSO a cache of DNS, but because they get a lot of traffic, they usually have the address you’re looking for.

If THOSE DNS servers don’t know, they’ll ask the authoritative name server, which is the ultimate “source of truth” for a given top level domain. For example, there is a small number of servers whose job it is to know about all the .com DNS addresses. These are potentially different from the authoritative .net, .gov, .biz, etc. (It’s also part of the reason why new TLDs like .google and .microsoft drive people nuts, but that’s a different rant.)

I’m skipping a few potential layers here – it’s possible a DNS server will ask an upstream server, who asks an upstream server, who asks….you get it. Ultimately, the goal is to ask the authoritative servers as infrequently as possible and cache the results. The idea, ironically, is to prevent just this sort of problem – flooding the authoritative server with too much traffic and bringing the system down.
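That chain of caches can be sketched as a few objects, each answering from memory when it can and asking upstream only on a miss. The names and addresses are invented, and real resolvers speak a network protocol rather than calling each other’s methods, but the flow of a lookup is the same:

```python
class Resolver:
    """One layer in the lookup chain: answer from cache, else ask upstream."""
    def __init__(self, upstream=None, records=None):
        self.upstream = upstream          # another Resolver, or None if authoritative
        self.cache = dict(records or {})  # authoritative layers start pre-filled

    def lookup(self, name):
        if name in self.cache:
            return self.cache[name]       # cache hit: no traffic upstream
        if self.upstream is None:
            raise LookupError("NXDOMAIN: " + name)
        answer = self.upstream.lookup(name)
        self.cache[name] = answer         # remember it for next time
        return answer


# Authoritative server at the top, your ISP and your own PC caching below it.
authoritative = Resolver(records={"example.com": "203.0.113.5"})
isp_dns = Resolver(upstream=authoritative)
my_pc = Resolver(upstream=isp_dns)

ip = my_pc.lookup("example.com")        # first time: walks the whole chain
ip_again = my_pc.lookup("example.com")  # second time: answered locally
```

After the first lookup, every layer along the way has the answer cached, which is exactly why so few requests ever reach the authoritative servers.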

So, with all this caching, you might ask how the system could possibly go down – even if the authoritative server is compromised, we’ve got hundreds of servers caching its results.

The issue is an issue all caches have, which is “how long should I trust my stale cached record before I go and ask for an update?” The layout of servers and DNS addresses changes infrequently for any particular site, but it does happen. And there’s always SOME stuff changing.

There are a couple of ways to handle cache invalidation (i.e. deciding my cache is out of date and asking for an update). One method, which is the one the DNS system uses, is a time-based one. Every DNS response comes with what’s known as a “time to live,” (TTL) which is a built-in expiry time. The server tells downstream systems “Trust this response until this time, then ask me again.” The typical TTL for DNS is 24 hours.
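A TTL-based cache of that sort is only a few lines. This sketch injects a fake clock so the expiry can be demonstrated without actually waiting a day; the name, address, and clock values are all invented:

```python
import time

class TTLCache:
    """Minimal sketch of a DNS-style cache: entries expire after their TTL."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.entries = {}  # name -> (value, expiry time)

    def put(self, name, value, ttl_seconds):
        self.entries[name] = (value, self.clock() + ttl_seconds)

    def get(self, name):
        record = self.entries.get(name)
        if record is None:
            return None
        value, expiry = record
        if self.clock() >= expiry:
            del self.entries[name]  # stale: time to ask upstream again
            return None
        return value


# A fake clock we can move by hand, instead of waiting 24 real hours.
now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.put("example.com", "203.0.113.5", ttl_seconds=86400)  # 24-hour TTL

fresh = cache.get("example.com")  # within the TTL: answered from cache
now[0] = 90000                    # roughly a day later...
stale = cache.get("example.com")  # expired: None, so we'd re-query upstream
```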

Choosing a good TTL is hard. Too short and you get a lot of redundant traffic asking about stuff that hasn’t changed. Too long, and changes don’t propagate quickly – for example, if a company switches ISPs and has a new DNS entry, they’ll keep getting traffic on the old IP address until everyone’s cache updates, which can be a problem. 24 hours was seen as a reasonable compromise.

So, we’ve got all these layers and layers of caching that means very few requests actually hit the central servers. But when the cache expires, it doesn’t matter how many layers exist – they all will tell you “I dunno – I need to go check!” And when the authoritative server can’t answer, you gots problems.

DNS responses are cached, but because the space is limited and domains might migrate IP addresses, they aren’t stored forever. They seem to be cached for three days to a week, going by how long it takes for everyone’s caches to update when a site moves to a new IP.

The AVERAGE might be 24 hours. But that varies WILDLY by service and how long the cache is good for is set by the DNS configuration for the domain name. Webservers for big domains might have a Time To Live of as little as about five minutes. Mail servers might have a week.

It does, and that helps the problem to some extent. The issue there is that sometimes IP addresses change, and for that reason, every time you ask about a name-address mapping, you get an answer and *also* get a Time-To-Live (TTL) for that answer. You can look at what your computer is remembering right now (assuming you’re running windows) with the command “ipconfig /displaydns”, which for this site right now shows:

which shows that there’s 10724 more seconds left before it’ll look up that name again. That’s about 3 hours. I seriously doubt that Shamus is going to change hosting providers before noon today, but if he did, he could safely shut down his old IP shortly thereafter.

Theoretically, the TTL can be anywhere from 0 seconds to about 70 years. Common values are 5 minutes, an hour, 4 hours, one day.

So right now, sites that can be reasonably certain that their IP scheme is very stable can help alleviate this issue by setting a long TTL – days would be fine. ISPs with well-configured caching nameservers would help, because then so long as your ISP’s nameserver has the name you’re looking for in its cache, you can get it. Of course, if it can’t get a name resolved, then all of its customers are screwed as well for that name.

Another thing that would help is something that’s both more technical and significantly more contentious. Right now, if you make a DNS request, there are 3 general classes of answer you can get: 1) The “right” answer. Here’s the IP, or something similarly useful. 2) NXDOMAIN, meaning “no such domain”. That means I contacted a server that *should* know the answer, and it told me that such a name does not exist. Note that NXDOMAIN *also* has a TTL, meaning “That doesn’t exist, but if you check back in X seconds, maybe it will then”. Finally, there’s 3) SERVFAIL, meaning server failure: I asked all the servers that should know, and none of them ever answered. This is what people were probably getting.

You *could* build a DNS server that caches entries beyond their TTL, and if it ever finds an instance where it would have to respond with a SERVFAIL, but has an expired cached entry, it instead responds with that. There are those that would argue that this violates the relevant RFCs, and they’re correct. And, there are some cases where it would cause problems. However, there are also cases where it could solve problems, and I think it’s worth looking into how to minimize the former while maximizing the latter.
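A serve-stale resolver along those lines might look like the sketch below. It’s a toy illustration of the idea, not any real server’s code: the upstream is modelled as a plain callable, and the name and address are invented.

```python
class ServeStaleResolver:
    """Answer with an expired cache entry rather than failing outright
    when the upstream servers can't be reached (the 'serve stale' idea)."""
    def __init__(self, upstream, clock):
        self.upstream = upstream  # callable: name -> (ip, ttl); raises if down
        self.clock = clock
        self.cache = {}           # name -> (ip, expiry time)

    def resolve(self, name):
        cached = self.cache.get(name)
        if cached is not None and self.clock() < cached[1]:
            return cached[0]                  # fresh answer, no upstream needed
        try:
            ip, ttl = self.upstream(name)
        except ConnectionError:
            if cached is not None:
                return cached[0]              # stale, but better than SERVFAIL
            raise                             # nothing cached: genuine failure
        self.cache[name] = (ip, self.clock() + ttl)
        return ip
```

When the upstream is healthy this behaves like an ordinary TTL cache; only in the would-be SERVFAIL case does it bend the TTL rules and hand back the old answer.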

Can we get lower level infrastructure that notices when part of a DDoS attack is being routed through it and responds appropriately? It seems like hardening routers might make unsubtle attacks harder to complete.

The problem is, it’s been really hard to get ISPs to implement. The larger an ISP is, the harder it is to do, but the smaller an ISP is, the fewer resources they have to put towards non-revenue-generating stuff like this. In the end, it seems to scale fairly accurately with ISP size, so that everyone agrees that it’s a great idea that everyone should do, but it’s just a little bit too resource-intensive for them to do themselves.

I both liked, and felt a bit disconcerted by, the re-post of the script in the post.

After watching the video I started reading, expecting some additional commentary (like what is often posted with Spoiler Warning and Diecast posts). I overlooked the note up top where you mentioned posting the script, and started to read; it took me a few paragraphs to realize it was a summary of the video or a word-for-word script (as it was).

This slightly annoyed me as I was expecting more, especially before I found the note on top I overlooked.

But I agree it’s nice overall… there are often times I’m not able to watch a video at that time (depending on what’s going on around me in real life) and/or would prefer to just read something. In which case a script (or even an overall summary) is appreciated. Though I wonder if there might be a better way to differentiate it from new/independent discussion. Much like you used quote boxes or different text formatting to denote in-character vs. out-of-character discussion in your Let’s Plays, perhaps you could do something similar for video transcripts to better differentiate them visually.

Looking back I see it’s something you’ve done since at least the second Reset Button. But it does look like a few of those did have some additional discussion beyond the transcript in the post.

Or perhaps I’m the only one slightly bothered by the lack of a visual cue to call out the text as a transcript…

Whoa! Is this something you’ve done before? I confess, I pretty much skip any and every video and podcast – on any site – for the same reasons others have cited: I can read faster than I can hear people speak; if the topic strays into something I’m not quite as interested in I can skim a bit; etc.

Anyway, thanks for that extra consideration! It’s awesome, and hugely appreciated! :) (I now wonder what other posts I’ve skipped because I didn’t realize there were textual alternatives.)

It’s certainly not the norm although a commenter mentioned this happening for previous Reset Buttons. I suspect that it can and will happen for videos like those where the script is written then read and recorded. For unscripted content like Spoiler Warning, it would be too painful to go back and transcribe.

I love Reset Button! Thanks for the video Shamus. Too bad I didn’t catch that poll earlier. I’ve been wondering, after the World of Warcraft content runs dry, are you going to try making new MMORPG comic content? I already read all your stuff on the Escapist, and I think it would be fun to see what you could write with DC Universe Online. Or give Star Wars: The Old Republic another try – I hear it’s been improved a lot.