Saturday, July 30, 2016

In my work at Rendition Infosec and SANS, I rely heavily on analogies to communicate complicated technical topics to those who often do not have a very technical background. I'd like to put those skills to use today to tackle the issue of ad blocker use.

Many websites, particularly "news" websites, rely on advertising revenue to stay afloat. If you block their advertising, they don't get paid, babies die, buildings burn down, and the economy crashes. So now a number of sites with very relevant content won't show you their content if you use an ad blocker. They argue that if you try to access their content with an ad blocker, you are stealing from them. On the surface of it, this sounds sensible.

But there's a rub. Many advertisements are malicious in nature. This is so common that "malvertising" is a very well known term in infosec. My post comes on the heels of the announcement of a huge malvertising campaign that hit 22 advertising networks. Patching third party products is a notoriously difficult proposition, so many of these malvertising attacks target out of date Firefox and Flash.

This guy doesn't use ad blockers, but as for the rest of us...

Seeing articles like this, I understand why people would want to use ad blockers. Think about the analogy to this in the real world. You need to go to the store to get some groceries. Unfortunately, the grocery store has a gang violence problem. The Crips control the dairy section and the Bloods control the produce section. Other gangs control the frozen food and dried goods sections. People you know have gone to the grocery store and been beaten or shot.

The grocery alliance wants to do something about the problem, but it's really hard. After all, they note that the gang problem is well known and people have been notified about how to appropriately dress to avoid infuriating the gangs (don't display gang colors). The grocery alliance says that the gangs only represent a threat to those who haven't patched their wardrobes (see what I did there?). If you only patched your wardrobe, there would never be an issue.

No, not that kind of gang.

Unfortunately, even those that have patched their wardrobes are more than a little worried about the grocery store gang violence. They want to shop using armored carts (that look something like tanks). The problem is that the grocery stores will all have to expand the width of the aisles for the armored carts. This reduces the amount of goods they can stock and causes all kinds of other costs they have to bear. The grocery alliance says that if customers want to use armored carts, they won't be able to make any money. They'll lose money providing customers with their groceries.

This puts consumers in a strange position. They certainly don't want to cause the grocery stores to go out of business, but they want to get their groceries and they want to get them safely. When a website says you can't visit it safely, it is doing the same thing as the grocery store saying "no armored carts."

The wrong plan - they're missing the point
Yes, some websites offer you the option to use an ad blocker if you pay for their content. But coming back to my grocery analogy, this is like the grocery alliance saying you can use an armored cart only if you pay for the privilege. Sure, you get your way and get to shop in relative safety. But this is ridiculous when it comes down to it. This would be like paying protection money to the mafia. As long as you pay us, we'll let you do your thing. Don't pay us and you'll run the risk of getting exploited.

This billboard, like publishers offering to remove danger for money, is missing the point

I understand the "no ad blocker" argument from businesses. But until online businesses get the criminal gangs out of their advertising networks, expect their arguments to fall on deaf ears. I'll keep using ad blockers (and avoiding the dangers of grocery shopping without an armored cart).

Friday, July 29, 2016

I went to see the new Bourne movie and it definitely has a cyber angle. I won't put any spoilers in this since I would hate to have someone ruin a movie for me. Well, not unless you feel like "the movie has bad hacking scenes and a lame Vegas car chase" is a spoiler. Note, you can get all of this from the trailer... There are some places where the producers seriously suck at getting cyber right.

"Use SQL to corrupt their databases"
In an early scene, someone speaking a foreign language I don't understand says something that apparently translates to this. I guess I can't fault them for this since you technically could use SQL to screw up a database, but I also can't imagine any hacker EVER saying these words.

Backdoors into CIA computers
No surprise, the heroes can hack into the CIA's classified mainframe from the Internet, because why not. Seriously, the CIA needs some decent termination procedures to revoke credentials from rogue agents and hardware tokens lost/destroyed in the field. Also, the CIA could stand to learn from businesses about terminating credentials for agents presumed dead. 'Nuff said.

One does not simply hack the power grid
When the CIA needs to turn off power somewhere, they just hack into the power grid and shut it down. In the real world, Russia's attackers spent six months inside the Ukrainian grid operators' networks before they shut off the power. Maybe Russia just sucks at hacking when compared to our CIA counterparts.

Just install some malware
Malware is magic and can pretty much do anything you need it to. Just say the word malware three times and you can magically take over any computer anywhere. The only saving grace here is that nobody uttered the words "zero day" so I didn't throw up in my mouth.

Hacking unknown cell phones, anywhere, and hot mic'ing them is trivial
Even when you don't know the phone number. I have to admit, even I was impressed when CIA hackers first found, then hacked, a cell phone in close proximity to a malware infected computer.

Don't rip off DEFCON
There's a total rip off of DEFCON in the movie, right down to some of the artwork. The storyline didn't need it; don't rip off DEFCON.

Vegas geography - not for amateurs
Finally, and perhaps this is a nit pick point, a chase scene is shot on the Vegas strip. Since Hacker Summer Camp and many other conferences are held annually in Vegas, can we assume that much of the target audience knows the geography? In one part of the chase scene they drive for miles while covering maybe a quarter mile of landmarks. Later, they somehow teleport from Bally's to the Riviera. Of course the Riviera was closed in 2015 and demolished in June, but hey - details...

Parting thoughts
The movie is good overall, but like a lot of movies that are "good overall," it leaves a lot to be desired when it comes to cyber fiction vs. cyber reality. Medical films regularly reach out to real doctors to consult. Maybe it's time that producers of movies featuring hackers actually get advice from real hackers.

Thursday, July 28, 2016

Warrant canaries can be useful tools for letting users know that you have received a national security letter (NSL) that you would otherwise be unable to talk about. Without a canary, you would be legally unable to let users know about the invasion of their privacy.

The idea of warrant canaries became very popular a couple of years ago and even spawned the CanaryWatch movement and website. But the CanaryWatch folks eventually terminated the project, citing changes in wording, missed updates, and other inconsistencies that created confusion among those examining the canaries. The project was still a success, however, since it got people talking and thinking about NSLs and other secret court warrants.

I raise the idea of warrant canaries today because a site I've used in the past, demonsaw.com, let their warrant canary expire today. This leaves me in an interesting position of wondering whether someone was lazy, someone was hit by a bus, or they were served with an NSL.

The first and last are concerning. If you say you care about privacy but can't set a calendar reminder, I'm a little concerned about your privacy street credibility. It's also sloppy and doesn't inspire confidence in the rest of your operations. If you've been served with a warrant on the other hand, I'm concerned about that as well.

Word to the wise, if you are going to deploy a warrant canary make sure you update it. Otherwise you're leaving your users in a confused state and possibly exposing sloppy internal practices. While a warrant canary can possibly increase user confidence in your operation, failing to update one does exactly the opposite.
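To make the "set a calendar reminder" advice concrete, here's a minimal Python sketch of the freshness check a canary owner (or a watcher) might automate. The 90-day window and 14-day grace period are assumptions for illustration only, not anything demonsaw or any other canary operator has published.

```python
from datetime import date

def canary_status(last_updated: date, today: date,
                  max_age_days: int = 90, grace_days: int = 14) -> str:
    """Classify a warrant canary by the age of its last update.

    Windows are hypothetical: "fresh" within the promised update
    interval, "stale" within a short grace period, "expired" beyond it.
    """
    age = (today - last_updated).days
    if age <= max_age_days:
        return "fresh"
    if age <= max_age_days + grace_days:
        return "stale"  # maybe someone just forgot the calendar reminder
    return "expired"    # lazy, hit by a bus, or served with an NSL?
```

A cron job that feeds this function the date scraped from the canary page and alerts on anything other than "fresh" would have flagged the demonsaw canary before users were left guessing.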

Wednesday, July 27, 2016

Are password managers bad? Should I stop using one? These are questions I get from Rendition Infosec clients every time there's a vulnerability discovered in a password manager. The answer is probably "it depends" for most organizations. Overall, most organizations will see an uplift in security by using password managers, but you should consider where the password manager stores your passwords and who (if anyone) has performed vulnerability assessments on the storage and transmission of the passwords.

I bring this up because today Tavis Ormandy discovered and responsibly disclosed a vulnerability in LastPass. The bug is reported as a full remote compromise.

But as to whether you should use a password manager at all? Most organizations are filled with people who, absent a password manager, will store their passwords in a spreadsheet named passwords.xls (looking at you, Sony). Or passwords.txt. Practically any storage is better than that. Or they'll reuse passwords, and sooner or later some website using the shared username and password will end up compromised. With our luck it'll be storing passwords in plaintext.

If you are going to use a password manager, keep it up to date and ensure that you enable two factor authentication if it's available. If two factor isn't available, you need a new password manager. Don't let one or two bad outcomes scare you away from something that's a great thing for security. That's how the antivax movement got started after all.
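Since I'm insisting on two factor, here's a rough sketch of how TOTP - the most common second factor - is computed under the hood, per RFC 6238: HMAC-SHA-1 over a moving time counter, truncated to a short decimal code. This is illustrative only, not a drop-in for any particular password manager's implementation.

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code for a Unix timestamp.

    The shared secret never crosses the wire; both sides derive the
    same short-lived code from it and the current time window.
    """
    counter = for_time // step                    # 30-second time window
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The RFC's published test vectors (shared secret `12345678901234567890`, timestamp 59) make it easy to verify an implementation like this one, which is exactly why a standard second factor beats a vendor's home-grown scheme.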

Monday, July 25, 2016

In this post, I'm going to focus on the risk assessment and risk management recommendations. The Automotive ISAC offers the following recommendations for risk assessment and management:

All of these recommendations are important. But the one that is most overlooked (by far) in my experience is the recommendation to monitor the security compliance of critical suppliers. Let's face it, your suppliers' security will have an impact on your security. Failure to consider supplier and partner security can lead to a core network compromise - just ask Target: their network compromise began with credentials stolen from an HVAC contractor.

So this sounds great, but how do you do it? At Rendition Infosec we recommend that you enforce supplier security through your contracts. Suppliers aren't likely to just turn over their security data to you unless you have contract provisions requiring it. Obviously, the bigger your organization is, the bigger the contracting stick you can wield. But if you are the big fish and small fish want to do business with you, evaluate their security.

Some potential discussion points for suppliers:

Ask for the results of the last penetration test they did. If they give you a Nessus scan report, run away... Fast.

Ask what they've done to remediate vulnerabilities identified.

Send an email to security@supplier-domain.com and see who answers. How long does it take to get an answer?

Ask about the products deployed in their SOC. How is the SOC staffed?

All of these are indications about their security and things you should be looking for.

We've all heard the old adage "a chain is only as strong as its weakest link." It's time to recognize that our supply chain is also only as strong as its weakest link.

Sunday, July 24, 2016

If you're following the shooting at a Munich McDonalds, you may have heard that the attacker used a unique tactic to lure victims to the scene of the crime. The attacker compromised the Facebook account of another user and posted under her account. He lured people to the McDonalds at 4pm, offering that he would buy them some food as long as it wasn't too expensive.

At this point, it's unclear how many of the shooter's victims were lured by this offer, as he did not show up himself until 6:30pm. I suppose free McDonalds is a tantalizing offer to some, but it seems that two and a half hours of waiting is too long for even the most hopeful. Nobody knows why the shooter was 2.5 hours late, but the delay - and the smaller crowd it likely left - is probably a good thing for Facebook.

Infosec angles?

There are at least two infosec angles here to consider. The first is to use this event to educate users on watering hole attacks. These attacks are often devastating to organizations - mostly because users have a really hard time detecting (or even understanding) them. The second is to consider potential liability if you run a Facebook style service that offers the sort of messaging used by the attacker.

For user education, this is as real as watering hole attacks get. The basic premise we need to convey in user education is that when things appear out of place, we have to apply caution - even if you are in a supposedly safe place (like Facebook). Basically this is a situational awareness issue. The internet is not a safe place. When you are there, be careful regardless of the site you are on.

As for liability, this event should be a great conversation starter on potential liability you may suffer when a user utilizes your application. In this case, there's little reason to believe that the attacker was successful in killing any victims due to the Facebook post. But what if he had? While IANAL (I am not a lawyer, you should talk to one), there seem to be some potential factors that might impact liability, including:

Did the user's post contain obvious hate speech which could have been algorithmically detected and blocked?

If the post contained an obvious threat, was there a way for other users to report this to your platform operations team? What is their procedure for interacting with law enforcement? What is their response time for responding to submissions of threats?

There are probably a number of other questions that your legal counsel would direct you to consider, but as I said, IANAL. At Rendition Infosec, we ask clients to consider how they would respond to threats that they see in the news. After all, you are most likely to be impacted by the same attacks as others in your industry. Also, if it's happened before you can't very convincingly say "we never considered that."

What constitutes a "messaging platform" for the purposes of considering liability? I wouldn't just consider Facebook style applications. Anything with a message board, forum, etc. could be used to facilitate this sort of watering hole attack. If you have one of these (and most organizations do), talk to your internal counsel about your potential liability - and how you can take steps to reduce it.

Saturday, July 23, 2016

On Thursday the Automotive ISAC released recommendations for increasing automotive cybersecurity. What can we learn from this? A lot it turns out - some good, some bad. It turns out that automotive cybersecurity isn't much different from cybersecurity anywhere else, so these recommendations are pretty universally applicable.

I'll focus mostly on section 4.0, titled "Best Practices Overview." The document focuses on a number of high level items, including:

Governance

Risk Assessment

Security by Design

Threat Detection and Protection

Incident Response and Recovery

Training and Awareness

I'll probably do some follow up posts, but for the moment the ones I want to focus on most are Security by Design and Threat Detection and Protection. Both of these contain very solid advice for most organizations, automotive or not. For instance, Security by Design focuses on the following areas:

None of these are bad, and in fact few organizations I work with are considering all of these points. But the number one thing I see missing here is any recommendation/requirement for third party security testing. Ask a developer if they've written secure code and you know what answer you're likely to get. Internal testing teams are often incentivized not to make waves when reporting vulnerabilities. Even when there's no pressure, they often operate in an echo chamber, and that's no good. Outside testers bring experience from other industries and manufacturers to bear against your product. And they're much more likely to bring the skills that real attackers (i.e. hackers) will bring to your product later.

As for Threat Detection and Protection, the outlook is a little better.

Not surprisingly for an ISAC, we see the recommendation to report threats to appropriate third parties. This is a good recommendation in general and totally self serving for an ISAC.

But my favorite recommendation here is to identify how to manage vulnerability disclosure from third parties. Entities outside the organization can and will discover vulnerabilities in our products. If the security department can't effectively deal with these disclosures, we are doomed to fail. I have multiple recommendations that I share with Rendition Infosec customers, including:

Ensure that you have a security reporting point of contact on the website

Operators who answer the general "contact us" phone and email must know where to route security inquiries

Once the security department is notified, they should engage public relations

Develop a timeline for response and communicate that timeline with the entity reporting the vulnerability

Stick to the developed response/remediation timeline. If deviations are a must, clearly communicate that with the submitter.

This isn't a comprehensive list, but will get you a long way towards good.

I'll leave this here for now. Let me know on Twitter or in the comments if there's interest in more review of this document and I'll post a follow up. Overall, we should commend the Automotive ISAC for their security processes.

Wednesday, July 20, 2016

As you may have heard, the Ubuntu Forums website was hacked recently leading to the compromise of the details of about two million users. These details apparently do not include passwords (even in hashed form) due to the use of Ubuntu Single Sign On.

The source of the hack? Another SQL injection from a known vulnerability. It seems we can't go more than a few weeks without another one of these popping up. The vulnerability in this case was an out of date (and known vulnerable) vBulletin plugin. As I mentioned in the Drupal post last week, when your public facing content management system pushes a patch, you have to be ready to respond, even if this means taking a short unscheduled outage window. Otherwise, you leave yourself open to attack.

As I discuss in the SANS Cyber Threat Intelligence course, threats occur at the intersection of capability, intent, and opportunity. Your attacker has the intent, and the moment the patch came out, advanced attackers started working on the capability. You alone control the opportunity by either patching or not. We've worked with customers at Rendition Infosec who have had public facing web applications attacked within 24 hours of the release of a patch, long before Metasploit had an exploit for the vulnerability. The attackers' actions on objectives lead us to conclude that most of these were targeted attacks. The attacker knew who they were compromising, had performed the recon previously, and was waiting for a vulnerability in the potential victim's infrastructure.

Ubuntu Lessons Learned

Separation of assets

According to the Canonical CEO in this blog post, Ubuntu was doing a good job of separating their code repositories from the forum servers. I would expect this in any company the size of Canonical, but frequently we see multi-use servers on client DMZs, and it makes me a little sick every time I see it.

Verdict: +1 Ubuntu

Reset system and database passwords

These probably weren't compromised according to the investigation, but were reset out of an abundance of caution.

Verdict: +1 Ubuntu

Updated vBulletin software to latest patch level

Sorry, you don't get points for patching any more than someone gives you points for brushing your teeth after chewing on an onion. It's just something you do, not something you get credit for. And you still lose points for being out of patch compliance in the first place.

Verdict: -2 Ubuntu

Added ModSecurity to mitigate SQL injection attacks

Smaller organizations get points for deploying a web application firewall (WAF), but not so much a company the size of Canonical. We would have expected they would already have a WAF in place, especially the free ModSecurity (which ironically they could sudo apt-get install for basic protection). A WAF won't fix your patching problems, but it will provide some basic protection against SQL injection attacks. Don't rely on it though; like a seat belt, it's only there to soften the blow. You can still die in the collision.

Verdict: +2 Ubuntu for deploying the WAF, good defense in depth. -1 for not already having one deployed.

Broad Announcement

I feel like this story almost slipped under the radar. I didn't get an email about it. Then again, Google may have auto-filtered it as spam. Breach notifications come so often now that they're getting to be like Nigerian prince emails... When I read the story, I headed over to ubuntuforums.org and didn't see any notification at all on their website. This is of course bad form. Even when you think nothing was compromised, you are better off informing your users - on the actual site that was compromised - not on a blog on another domain entirely.

Verdict: -1 Ubuntu, just because you don't think it was a big deal doesn't mean you get to pretend it didn't happen.

Conclusion

Learn from Ubuntu's missteps here and you can make sure your customers have a better "breach experience" than Ubuntu's users did.

Friday, July 15, 2016

You may have noticed I've not been blogging as much recently. There's good reason for it I assure you - mostly involving some crazy work schedules. I am still weighing in on current topics, but through a different venue. Earlier this year, I accepted a position on the Editorial Board of SANS NewsBites. If you're not familiar with NewsBites, you should definitely subscribe. Twice a week, you get information on top infosec trends with commentary from practitioners in the field. Best of all, it's free. Anyway, my schedule is starting to free up a little bit and I'll be doing more blogging while contributing to NewsBites. Last month it really came down to one or the other and I chose NewsBites. I'll do better from now on, I promise.

Thursday, July 14, 2016

If you have a Drupal website, patch now. There are three different vulnerabilities for which patches have been released, all of them have potential for remote code execution.

For those that don't know, Drupal uses a security scoring model that is different from CVSS. Rather than scoring vulnerabilities on a 0-10 scale (10 being the worst), Drupal uses a scoring system based on the NIST Common Misuse Scoring System. You can read more about that system here. Alternatively, you can just accept that it's a scale from 0-25 (25 being the worst). Of the three vulnerabilities, the lowest score is 17 and the highest is 22. Any of those should give you pause.
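As a rough illustration of how that 0-25 scale reads in practice, here's a small Python helper that maps a score to a severity label. The band boundaries are my reading of Drupal's published risk levels - double-check them against drupal.org before relying on them.

```python
def drupal_risk_label(score: int) -> str:
    """Map a Drupal (NIST CMSS-based) 0-25 risk score to a severity label.

    Band boundaries are assumed from Drupal's published scale and
    should be verified against drupal.org's security documentation.
    """
    if not 0 <= score <= 25:
        raise ValueError("Drupal risk scores run from 0 to 25")
    if score >= 20:
        return "Highly critical"
    if score >= 15:
        return "Critical"
    if score >= 10:
        return "Moderately critical"
    if score >= 5:
        return "Less critical"
    return "Not critical"
```

Under these assumed bands, the three vulnerabilities in this release (scores 17 through 22) all land in the top two tiers, which is the "stop what you are doing" zone.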

The good news is that not all installations are vulnerable. One vulnerability requires the Coder module to be installed (though not necessarily in use). Another (the most serious IMO) requires REST services to be enabled. A quick survey of clients who use Drupal indicates that this is a popular module to have enabled. In other words, if Rendition Infosec clients are a representative population, this vulnerability is pretty serious.

Short term action items
The advice to stop what you are doing and patch now should be obvious, but I'll say it anyway. Stop what you are doing and patch now.

Long term action items
Now on to the longer term advice. Look at your patching program. If your patching program can't keep up with vulnerabilities like this, seriously consider how you can improve it. Drupal put out a PSA on 12JUL16 that they would be releasing the patches on 13JUL16. That's not much heads up. But then again, we're talking about three different RCE vulnerabilities. I'm happier that they provided a 24 hour heads up than none at all, and definitely happier than having them sit on the vulnerabilities.

Even if you don't run Drupal, you need to consider how your shop would respond to a similar vulnerability where you had limited time to patch. If the answer is that you need to convene a change control board meeting in two days to talk about the implications and then schedule an upcoming outage window to apply patches, you'll likely have been exploited by the time you apply the patch. Even if your house isn't on fire, now is a great time to run a fire drill.

Tuesday, July 12, 2016

Yesterday I published a post on the recent 9th circuit court of appeals case where it was effectively upheld that sharing your password to bypass restrictions is a crime under the CFAA.

If you are interested in the ruling on the case, check it out here (PDF). The ruling was not unanimous. A three judge panel ruled 2-1 in favor of upholding the original conviction under the CFAA statute. While not unanimous, it still sets legal precedent until the supreme court hears the case (or one similar).

It's clear from their blog post that the EFF disagrees with the ruling. But in my opinion they seem to be overplaying their hand quite a bit. They seem to think the ruling criminalizes sharing a password in general. But the real issue at play is whether you, an authorized user of a computer system, can delegate your authorized access to a third party. And the 9th circuit seems to say "no, that's not okay."

The EFF tries to bring in some pretty unrelated examples to bear here. Let's examine them one at a time.

1. A husband tries to pay a bill using his wife's banking credentials
Nobody has been defrauded here and there's no apparent harm done. The wife delegates use of her account to use her resources to pay a bill.

2. A student uses a parent's Hulu or Amazon account password
This one actually is wrong. The student uses the password to avoid paying for their own account. Nothing left to say about this. It's wrong plain and simple. Amazon or Hulu are being robbed of additional revenue from this unauthorized account sharing. Should it be a federal crime? I hate to weigh in on this - but I think when the situation is examined objectively we can agree this is wrong.

3. Someone checks Facebook for a sick friend
If the sick friend provided authorization, I can't imagine how this is going to be considered a crime under the CFAA, despite the EFF's posturing. The user providing permission to the account "owns" the data (though Facebook clearly does as well).

Closing thoughts
The EFF provided one example of bad behavior and two examples that clearly aren't related to the case at hand. But think about this from the perspective of a system owner. You provide access to an employee who is now an authorized user. The authorized user then shares their password with some random stranger you didn't authorize. Are you okay with that? Of course not. Did they have the authority to share your system access? Of course not. The EFF also wants to split hairs about whether the situation changes if the person is someone who was previously authorized to access a computer system (e.g. a former employee). But if the person knows (or reasonably should know) that their access has been revoked, I think the situation is just as clear here. Common sense should rule the day.

The EFF also has a nice collection of documents on the case, including briefs that they filed. Regardless of where on the issue your opinions lie, those briefs represent legal opinion while the ruling represents law.

Monday, July 11, 2016

Is sharing your password a federal crime? It turns out the answer is probably yes, at least for the moment. The ninth circuit court of appeals says sharing your passwords is a violation of the CFAA. Currently Netflix, HBO, and others aren't lining up to identify users who are sharing passwords. But should you be concerned?

We all have something to hide
It's no secret that law enforcement often uses anything they can to hold a suspect in custody or get probable cause for a search warrant when they are trying to develop evidence in a larger case.

If you think "that's okay, I'm a good person, I have nothing to hide" think again. Moxie Marlinspike wrote an outstanding blog post dispelling this myth some time ago. If you haven't read it, do so now, then come back and finish this post. No, really - it's that important. Bottom line, most of us are breaking one obscure law or another practically every day - often without having any idea we are breaking the law - we all have something to hide...

Law enforcement targeting
Suppose for a moment that you - through no fault of your own - become a suspect in a murder investigation. The police get a warrant for your phone, but there's no direct evidence there. The police need warrants for other computers and digital devices (yes, I know it's a stretch that they'd have a warrant for the phone and not the rest of your devices). But suppose for a moment that you are using a shared password for Netflix viewing. They determine this and use this leverage under the CFAA to get warrants for many more digital devices.

Alternatively, suppose law enforcement is trying to coerce a witness to testify against a criminal (I mean encourage, I'm sure law enforcement never coerces a reluctant witness). They determine a reluctant witness's family member is using a shared account for Netflix. Law enforcement leverages this knowledge. "We can forget about CFAA charges if you make it easier on everyone and testify." There's even a possibility that law enforcement is able to query HBO Go and/or Netflix to find suspicious account usage and find a suspect family member from there. A long shot? Sure. Unthinkable? Not at all.

Employee misconduct investigations
Misconduct investigations happen all the time at larger enterprises and most DFIR professionals will be involved in one sooner or later. Your employer has it out for someone - often for good reason - and is ready to fire them. But to prevent a wrongful termination suit, they're looking for more evidence of wrongdoing. Sometimes that manifests itself when you find they've been watching Netflix on their work machine. With a good acceptable use policy (AUP), this is a slam dunk for employee misconduct.

But picture this: you're doing an employee misconduct investigation and discover that your employee is logging into Netflix on their work machines using someone else's account. What sort of leverage does this create for the employer? Well, when you can threaten to report someone for a federal crime if they try to fight a termination, that's a great position to be in. But it's not just someone you can report - technically it's two someones - the employee using someone else's account and the person who shared the password in the first place.

It may sound like a mean thing to do to a suspect, but at Rendition Infosec we put our clients' best interests first - so I'll definitely be putting more emphasis on evidence of password sharing in my future investigations. It's also probable that this ruling could be applied retroactively - meaning that if you are working a case today, you could use this to discover shared account usage from a year ago and still at least threaten that this is in scope. At the end of the day, the employee is likely to roll over and accept their walking papers rather than risk federal charges - even if they don't think they did anything wrong.

Conclusion
The CFAA is an extremely disturbing piece of legislation. This particular application of CFAA charges is a disturbing evolution to say the least. The worst part is that those with low incomes (i.e. those least able to afford online subscription services) are the most likely to share account passwords. Those who know me know that I'm no SJW, but from where I sit this application of the CFAA is just wrong. It's time for some legislative overhaul since the courts clearly can't apply common sense to this extremely poorly written law.

Sunday, July 10, 2016

When reviewing security plans for Rendition Infosec customers, one of the points we like to talk about is domain management. In other words, who manages the portfolio of domain names owned by the company? Who ensures that they are updated appropriately? What about certificates for those domains? Is the company monitoring for typo squatting derivatives of those domains? Finally, where are those domains hosted? In house? In an offsite colocation facility? With a third party hosting service? Inventorying all of these items and having points of contact for this information is an important part of any security program.
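Once you have that inventory, the renewal check itself is trivial to automate. Here's a minimal Python sketch; the inventory format (a dict of domain to expiry date, maintained by hand or pulled from your registrar) and the 60-day warning window are assumptions for illustration.

```python
from datetime import date

def domains_needing_renewal(inventory: dict, today: date,
                            warn_days: int = 60) -> list:
    """Return domains from {domain: expiry_date} that expire soon.

    Already-expired domains are included too, since those need
    immediate attention (or are already lost, TP-Link style).
    """
    return sorted(domain for domain, expiry in inventory.items()
                  if (expiry - today).days <= warn_days)
```

Wire the output into a ticketing system or a weekly email to the domain portfolio owner, and forgetting a renewal stops being a single person's calendar problem.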

I'm highlighting this because TP-Link apparently didn't get this message. They forgot to renew www.tplinklogin.net and tplinkextender.net. Both of these domains were used to help users configure their home routers. A domain reseller noticed these domains had expired and registered them. But the reseller isn't giving them back to TP-Link. Instead they are offering them for sale at some pretty steep prices (though the exact price isn't listed).

Because these are home devices, we don't expect this vulnerability to impact businesses. And to be honest, I don't think it is likely to impact many home users either. First, the domains would have to be purchased and then the attacker would have to redirect users to a malicious site. Even if the user enters their password (hint: it's probably admin), the attacker probably can't do anything with it unless the user has enabled remote administration (it's off by default).

So while the risk for users in this particular instance is low, forgetting to renew a domain can have some pretty obvious career limiting impacts. The brand damage that could be inflicted by an attacker or a competitor is obvious. A well documented domain management program will prevent that outcome.