I realize that looks odd. However, we made a decision a long time back to separate vulns on hosts from vulns on network gear. That's why you'll see the GUI focus exclusively on endpoint vulns, not device vulns.

Why? A couple of reasons. First, the work to patch or upgrade is generally done by different teams, using different methods. Second, our main focus has been to prioritize *host* vulns that have not been patched, using a quantified, repeatable scoring method that shows the relative badness of the different issues the scanner found. The problem with network device vulns in this context is that they skew the scores for hosts.

Serious network device vulns are relatively uncommon, but when they hit, they are generally of the "all hands on deck" variety. That is, if someone can compromise a router and run arbitrary commands, any calculation we attempt for the downstream damage will be wrong, on the low side. Our standard scoring has a major factor from "leapfrogs" - the bad guy hits host A, so we ask what the network allows or does not allow the bad guy to do from host A. But if instead they own router R, the concept of a "leapfrog" barely even applies any more - the bad guy now pwns the network, so asking what the network allows or does not allow is moot. Given control of a router, you can scramble the very network defenses we're trying to analyze! (You can MITM traffic, you can rearrange the topology, you can inject fake control plane info to attract traffic to where you can read it, etc.)

So, long story short, the consequences of an arbitrary command execution vuln on a router or firewall are so bad that we decided to remove them from our calculation. If your scanner tells you your network fabric itself can be compromised, don't ask RedSeal to spend a couple of hours calculating the exact degree of badness - it's bad, we all know it's bad, so go fix it first.
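To make the policy concrete, here is a minimal sketch of the idea in Python. All the names here (`Vuln`, `leapfrog_factor`, `prioritize`, the reachability weights) are hypothetical illustrations of the approach described above, not RedSeal's actual API or scoring formula.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    host: str
    cvss: float              # base severity from the scanner
    on_network_device: bool  # router/firewall vs. endpoint host

def leapfrog_factor(host: str) -> float:
    """Placeholder for 'what can an attacker reach from this host?'
    A real model would derive this from access-path analysis; these
    weights are made up for illustration."""
    reachable = {"web01": 0.9, "db01": 0.3}
    return reachable.get(host, 0.5)

def prioritize(vulns):
    """Score only endpoint vulns; network-device vulns bypass the
    scoring entirely and come back as fix-first alerts."""
    alerts = [v for v in vulns if v.on_network_device]
    scored = sorted(
        (v for v in vulns if not v.on_network_device),
        key=lambda v: v.cvss * leapfrog_factor(v.host),
        reverse=True,
    )
    return alerts, scored

alerts, scored = prioritize([
    Vuln("rtr-core", 9.8, True),
    Vuln("web01", 7.5, False),
    Vuln("db01", 9.0, False),
])
# rtr-core never enters the ranking - it's an alert. Among the hosts,
# web01 outranks db01 because 7.5 * 0.9 = 6.75 > 9.0 * 0.3 = 2.7:
# raw severity alone isn't the whole story once leapfrogs are factored in.
```

The point of the sketch is the branch, not the arithmetic: a router compromise short-circuits the model, so it never competes for a rank with host vulns.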

This is different from host vulns, of course. They are much more common - so much so that every organization has some medium-to-serious vulns still unpatched at any given time. For these, our quantified scoring approach works really well.

To stretch an analogy, imagine we have a rating scheme for knives, pistols, and rifles - it can produce good relative rankings of danger, depending on context. Then someone shows up with a nuclear weapon - how does that fit into the rankings? It basically doesn't; it's a whole other scale of damage.