Posted
by
CmdrTaco
on Tuesday July 27, 2010 @09:20AM
from the yesterday dept.

nk497 writes "When it comes to security flaws, who should be warned first: users or software vendors? The debate has flared up again, after Google researcher Tavis Ormandy published a flaw in Windows Support. As previously noted on Slashdot, Google has since promised to back researchers who give vendors at least 60 days to sort out a solution to reported flaws, while Microsoft has responded by renaming responsible disclosure as 'coordinated vulnerability disclosure.' Microsoft is set to announce something related to community-based defense at Black Hat, but it's not likely to be a bug bounty, as the firm has again said it won't pay for vulnerabilities. So what other methods for managing disclosures could the security industry develop that balance vendors' need for time to develop a solution and researchers' need to work together and publish?"

... and posted them elsewhere. So here's a quick copy-paste, and what my thoughts are.

======================

Procedure:

Step 1) Notify the manufacturer of the flaw.

Step 2) Wait an appropriate time for a response. This depends on the product: for an OS it could be as much as months, depending on how deep the flaw is; for web browsers, probably 2-3 weeks.

Corollary 2a) If the manufacturer responds and says it's a will-not-fix, you have some decisions; see 3a.

Step 3) If no response, announce a proof-of-concept exhibition with a very vague description. If people ask for details, stay as vague as possible. The company has already been contacted, so they know the issue, or they can contact you from the announcement. Schedule it with enough time for the company to release a fix.

Corollary 3a) How critical is the flaw? If it's marked as will-not-fix and it's very detrimental, you might have to sit on it.

Step 4) Give the exhibition. With luck, the flaw has been fixed and the last slide is about how well the manufacturer did.

Step 5) ...Profit!!!! (While this is the obligatory joke post, check out E-Eye Security to see how it's happened before.)

===============

WRT 3a: You'd be surprised how often this is done. There are two long-standing issues against a certain piece of software that, while being uncommon and not often-thought-of attack vectors, are less than trivial to exploit and gain full access. The manufacturer has, in fact, responded with a "works as designed, will not fix." People in the information security industry have found the flaws so detrimental that they've imposed a self-embargo on openly discussing them. Without manufacturer buy-in, a fix just can't come in time if that particular information were released, and the effect would be significantly widespread. The only thing releasing the information would do is cause a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months. With no evidence that the exploit is being used in the wild, save for a handful of anecdotal reports, the issue has become a biannual prodding of the manufacturer.

WRT WRT 3a: So the industry and the manufacturer are basically patting each other on the back, happy in the knowledge that if no one from the club talks about the problem, it's impossible to discover otherwise? It's going to be slightly icky to say "we told you so" when this is discovered independently and causes "a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months." (Note that I used "when this is discovered," not "if." As you may be aware, if something can be done, it's only a matter of time until somebody does it.)

I sure hope they were NOT paid; it would make them part of a conspiracy to cover up flaws. And when someone uses that flaw, it would make them and the companies they work for possibly liable for a large amount of damages and possible jail time.

It's probably worse than that. GP didn't give us much to go on about the nature of the attack, but generally a flaw described in such severe terms either (1) offers a foot in the door for the attacker to go after other systems on the network, or (2) exposes sensitive information. By contrast with flaws that allow DoS (for example), it isn't typically obvious when a flaw of that type is exploited.

So the question isn't "how do you know someone won't discover the flaw?" It's "what will you do when you notice it being exploited?"

> Remember basic security: tell no one who you are, and don't go attention-whoring after you release.

You've identified the real issue, but it is often ignored. The problem isn't the disclosure itself. The problem is that so many people with such disclosures to make seem to want credit and attention for their efforts, but also want to be free of the risks associated with seeking that attention. Anonymous channels exist. Release information via one of those, and then if somebody is upset about it, they can't come after you.

You need to notify CERT, and then they have the ability to apply more pressure on the manufacturer as they simultaneously publish a very vague notice to the community that a flaw is being worked on. If CERT is involved, you have a much higher probability of not being ignored or told "will-not-fix," because it is already public knowledge that there is an exploit that needs fixing. It's in the record. The official "report cards" for the vendors then have the clock start ticking the minute you report the flaw, and the vendor cannot deny that they were notified and/or aware of the problem. In other words, they can't sweep it under the rug very easily, and you have done the best you can do without causing mass pandemonium.

So, your "people in the information security" are basically helping the vendor sell faulty software while simultaneously withholding crucial information from users of said software? If the issues you mention are indeed "less than trivial," you help the vendor cheat people into thinking that they are safe with the software.

"People in the information security" have the job of making the IT environment safer. You must force the vendor to fix these holes, even if it takes a vulnerability disclosure and a public outcry to do it.

Never said it was an easy position. Not one of them likes the situation they are in, but no one has been able to come up with a good solution.

To be fair, I think that, as widespread, detrimental, and unknown as these two problems are, it's very unlikely that there are more than a handful of such cases in software out there today, and it's only done in the most extreme of situations. At least, that's my sincere hope.

As for it being used right now: to be honest, we don't know that it isn't. But generally, it's a widely held assumption that it isn't.

Let's see: time to patch any major portion of GNU/Linux is probably less than 2 weeks, and that's from bug report to update in the distros' repos. If OSS can do it with free labor in 2 weeks, paid devs should be able to do it in less, say half: 1 week. I know my Apple keyboard wasn't fully supported when I bought it; less than a week later there was a patch applied to mainline stable kernels that corrected the issue, and that was just for some of the Fn keys not working as advertised. So a month max sounds good.

Huh? If there's a severe vulnerability and the manufacturer refuses to fix it, you should release it immediately. Then at least those affected can mitigate their vulnerability. Otherwise, the black hats have free rein.

Quote: The only thing releasing the information would do is cause a massive Zero Day event that would only harm consumers or leave them without the services of the software for several months.

---

So you prefer the alternate option, where you sit on it and only the black hats have access to the zero day event that would harm consumers and leave them without services of the software for several months.

I see the wide difference.

You would prefer to keep your exploits open and unpatched, so no one can protect themselves?

"Unsupported OS" means "unsupported OS." The vendor disavows any responsibility for bad things that happen when using their software on your unsupported platform.

This is a common thing for software vendors to do to close out tickets quickly. If it's an unsupported scenario (hardware, software, use case, etc.) then they can close it and keep their average ticket lifetime down.

A little shady, I guess, but if they never claimed to support your platform I don't see what you could really complain about.

I don't see that as being a very easy attack vector to exploit. If the attacker is to the point where he can install a guest OS on your VMware server, you were already completely owned. If it's a disgruntled sysadmin, then the solution is "fire him/her and change the passwords." So... yeah, unsupported OS.

Now, if you find a way to exploit that from a *guest* OS that was supported and got owned (like from within Windows Server 2008 or something), and you can run something that blows up ESX, then that may be a different story.

Time after time it's been proven that the safest security is the security that is shrouded in the most mystery. Why can't anyone hack Windows 7? Because it's new and no one knows how it works. People like Ormandy are a bane to the community because they steal code from Microsoft (there is no other way they could know about these flaws) and then, once they've stolen it, they release it for virus writers to hurt the common man. They are a public enemy, and I'd suspect he has contacts inside Microsoft (if you're reading this, Steve Ballmer, I suggest you begin purging those who doubt you and those closest to you).

I cannot believe Google would show support to someone who is most obviously a criminal aiding and abetting other criminals.

Nobody wants their source code shown to malware writers for obvious reasons so let Microsoft have its privacy. Why do individuals get privacy rights but not Microsoft? Did you ever stop to think about that? No, you didn't, because you were too busy helping the bad guys.

You should never reveal a security flaw. It's called common sense about safety and protecting everyone around you.

I can't tell if this is sarcasm or not. The US never revealed the security flaw for ENIGMA because they were using it against the Germans, while the Germans believed ENIGMA was secure and unhackable. We had them by the balls.

Actually it was the Poles and the Brits who broke Enigma: the USA broke the Japanese codes. Irrelevant in any case though. The Germans had developed Enigma themselves and were using it only internally: there were no trusting "users" at risk.

The vulnerabilities are the same regardless of who is at risk. The argument is that only 'good guys' are able to find vulnerabilities, and that 'bad guys' don't find or can't keep hold of such information, or just can't use it. The GP purports that keeping problems a secret will never result in secret underground cults developing a cohesive, structured approach to abusing those problems.

Tell both. But if you announce something, please document how you did it and don't brush off the vendor. (Email from users and the press can get pretty thick after you announce something; if you're ethical and really want to fix the problem, all that noise should be low priority...)

Yep, but that's under the assumption that the bad guys need the good guys to tell them about the holes. Even so, if Win7 can be killed by a packet on port X, it's simple for users to mitigate upstream by blocking port X at the firewall (you do have a separate one, right?).
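A mitigation like the firewall block just described can be sanity-checked from outside. This is a rough sketch; the host name and port below are hypothetical placeholders, not tied to any real flaw:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Handy for checking that a firewall rule blocking a
    (hypothetical) vulnerable port actually took effect:
    once the rule is in place, this should return False.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or filtered by a firewall.
        return False
```

For example, `port_reachable("fileserver.example.com", 4444)` returning `False` from an outside host suggests the block is working, though note a filtered port and a down host look the same to this check.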

If there is no way to mitigate it outside of a vendor patch, let the vendor know first, and tell them they have, say, 2 weeks to be making progress...

Also, this all reinforces my belief that commercial software development needs to work like engineering.

Simultaneously you mean? That leaves the vendor no time to fix the flaw.

Simultaneously you mean? That forces Microsoft to fix the flaw, instead of letting it stew for years or decades.

Fixed that for ya! :-)

Sarcasm (even if true) aside, the simple fact is that the largest problem with any of these scenarios is the ill will Microsoft has caused in the security community. Regardless of whether you want to argue it was caused by the complexity of the products, or by Microsoft's unwillingness to fix issues, or a combination of both, the simple fact is that Microsoft has, in the past, earned that ill will.

I agree with MS on this: a deadline isn't always feasible. They have to test on many different levels before they can release the update. Google just used Ormandy to get some positive PR for themselves. Frankly, from my point of view, Google screwed this one up, and Ormandy or any other researcher cannot hold companies at gunpoint to release a fix ASAP. If he had given them a 60-day disclosure window and, even after that, MS had not provided any response, then releasing the bug details would make sense. The way Ormandy handled it did not.

A deadline's always feasible. It may not be possible to come up with a clean fix in a short timeframe, but you can always come up with either a workaround or something the users can do to mitigate the damage. This may not be ideal from the vendor's point of view, but it's not the vendor who's in danger of having their systems attacked so I'm not overly concerned about their public-relations heartburn.


If you are not concerned about the vendor's public relations, then why release at all? It seems to me that the justification for release is precisely that the researchers ARE concerned about the vendor's public relations: intent on harming it.

It's end users that don't follow security issues who are most at risk, and releasing exploits hurts them pretty much directly and immediately.

If it's a critical bug in software that a typical grandma (and other non-geeks) uses, I claim that it is ALWAYS irresponsible to release the details of the exploit into the wild. Every single time, no matter how much time has passed waiting for a fix. This belief is formulated on the premise that the vendor's public relations don't mean shit either way; it's the end users that mean something.

It seems to be standard practice to threaten the guys who find vulnerabilities with jail time or fees. I uncovered a major security flaw in a piece of software (it allowed an attacker to spawn a shell as root with extreme ease) and also found a way to circumvent the DRM, and what happened... I got stiffed. Instead of FIXING the problem (which is still intact to this day), the company attempted to sue for copyright infringement, among a few other "charges." Luckily, I had a great lawyer and I had documented EVERYTHING from 0 to 60. I was lucky.

This makes me sick. One minute, corporations are talking about providing "rewards" for unearthing flaws and vulnerabilities, and the next, they are trying to sue for every penny. If it weren't for us, their systems wouldn't last a week without some script kiddie coming along and bringing the whole thing to its knees.

It's interesting that the talks center around the responsibility of the researcher and the vendor, but often little attention is paid to the responsibility of the user. Are they as liable? For example, if a manufacturer sells a door lock with flaws but the user keeps the windows (ha) open and someone on the street shouts, "Dude, you're using a Schock Pax H23 and it can be opened with a loud scream!" who is responsible?

As primarily a Linux user, I used to think that the tools just didn't exist on Windows to do this kind of analysis.

There is never, ever a responsibility. You didn't write the bug, you didn't miss it in testing, you didn't release it. You owe the developer nothing.

The only ethical consideration should be your sole judgement about the best method to get a fix in the hands of vulnerable users.

You don't like that, Microsoft? Then do your own vulnerability testing and don't release software with vulnerabilities: the problem goes away overnight. Until then, sit down, shut up, grow up, and quit your bitching about being caught with your pants down.

The flaw in this thinking is that it's not the developer who is ultimately harmed by a disclosure... and I rather doubt that the x-million users of the software will appreciate that you released the information for their own ultimate good.

Not owing someone something, doesn't mean you can act without regard to that person. I don't owe you anything, but I still have to stop at a crosswalk if you're walking through it.

The question isn't "do I owe you anything?" as though disclosure were inaction and delaying disclosure were action I might undertake as a favor. Disclosure itself is an action, and the question is "if I do this, am I liable for resulting harm that may befall you?"

> If a court were to find that a specific attack occurred because of your disclosure and would not have occurred otherwise, you may be held partially liable to that attack's victim, even if your disclosure ultimately prevented many more attacks.

Not likely in the USA. Absent a contract you have no duty not to utter true statements.

Interesting... are you talking about how things are, or how you want them to be?

The reason I ask is, if such a blanket statement were a true description of civil liability, I don't think the EFF would spend so much time talking about how to limit your liability when you publish a vulnerability (i.e. utter true statements).

Publication of truthful information is protected by the First Amendment. Both source code and object code are also protected speech. Therefore, truthful vulnerability information and proof-of-concept code are constitutionally protected.

This protection, however, is not absolute. Rather, it means that legal restrictions on publishing vulnerability reports must be viewpoint-neutral and narrowly tailored. Practically speaking, a blanket ban on disclosure would be unlikely to survive that scrutiny.

A security researcher has no particular duty to users either, but some may assume one for themselves. If so, releasing depends upon whether you're suspicious that exploits exist in the wild.

If bugs are actively being exploited, they are most likely being exploited by the worst people, so publicly enabling all the mostly harmless script kiddies will help matters by forcing the developer to issue faster fixes, possibly in multiple stages. If a bug isn't being exploited, fine: just tell the developer, and publish later.

The flaw in this thinking is that it's not the developer who is ultimately harmed by a disclosure... and I rather doubt that the x-million users of the software will appreciate that you released the information for their own ultimate good.

The current users may not appreciate it, but then they may also decide to find a better vendor if they are more acutely aware of the time that the vendor has had to fix the problem.

If you find a brand-new vulnerability and go straight to IRC with it, you are not just hurting Microsoft or sticking it to the man. You're hurting everyone that runs that software. You are also creating bigger botnets, which can then be further used in DDoS attacks and extortion attempts, etc. So in effect you are damaging the Internet and making it a bigger cesspool. There are ethical issues around vulnerability disclosure. You strike me as the type that collects bots and so probably don't care, but the rest of us do.

There's also another ethical issue: keeping me (as an administrator of vulnerable systems) in the dark about the vulnerability puts my systems at risk and prevents me from protecting them. You are hurting me in a very direct way by not disclosing the problem to me. If I know the problem exists, I can, for instance, shut down the vulnerable services (if they aren't necessary for my systems to operate), block access to those services at the firewall, and/or replace the vulnerable software with an equivalent from another vendor.

I can for instance shut down the vulnerable services (if they aren't necessary for my systems to operate),

Why are the services running if they aren't necessary?

Someone should have presented a business case for every process running on the server. Some of these are trivial ("without a kernel, the server won't run"). But there shouldn't be any 'nice to have' or 'may come in handy one day' services running.

In a lot of cases they're convenient but not necessary. I'd prefer to run them, since they make life simpler for everyone, but I can live without them if that's what's required to keep things secure. E.g., webmail, or web access to a ticket-tracking system. They're nice to have, and there's a compelling business argument for having them available as an alternative to dedicated client software, but not so compelling that we'd be willing to sacrifice security to have them. So if they're secure, we want them running; if not, they get shut off.

Why are you running unnecessary services on a bastion host? You are not a very good administrator if you are doing that. Also, what if there is no workaround and you need the service to conduct business? You are basically screwed. At the very least, the disclosure lets the whole world know that you are vulnerable until you get around to implementing the workaround on a presumably high-profile production machine that's probably high-risk for in-line changes. Some favor.

Problem here: you're assuming that the disclosure causes the problem. That's incorrect. I was just as vulnerable, and just as likely to have someone exploiting the vulnerability, before the disclosure as after. If the researcher found it, odds are the black hats found it long ago and have been actively using it. But now I know about the problem and know what to look for to see if we've been breached (or are in the process of being breached).

"If you find a brand new vulnerability and go straight to IRC with it you are not just hurting Microsoft or sticking it to the man. You're hurting everyone that runs that software."

Uh... no. The one hurting the user is the company that didn't put enough effort into its development and QA practices; the one that prevents other market rivals from offering a properly engineered product by undercutting them on price.

It's funny how big corporations are able to mutate public opinion in such weird ways that they even get sympathy for it.

Once you suspect a security flaw, flare a public mailing list with developers on it. Ask them for help tracking down the issue, until you as a group determine if you've discovered a hole and get a proof of concept running, all in public discussion.

No one is bright enough to find a security hole that couldn't have been discovered elsewhere before. So it's pretty likely the flaw is either known to the vendor, who might not have seen the need for fixing it, or known to an attacker, who already uses the flaw and just hasn't appeared (yet) on the radar of any researcher or the vendor. And since it's possible that you yourself are being monitored by somebody else, your finding might be in the open that way. So there is no sense in keeping the public in the dark.

Do not give the bad guys the chance to learn about a flaw earlier than the users who are affected by it. If you don't publish the flaw, there is a real possibility that it will be sold on black markets and kept secret so it can be used against customers. You can see that full-disclosure groups are targets of commercial crackers. Full disclosure is like destroying the business of criminals.

A customer should always be aware of a flaw and know how to protect himself against it.

The problem with "responsible disclosure" is being allowed to do it. Reporting a bug to a vendor might get you a "fix" response (best case), might get you ignored (average case), or might get you hit with a gag-order lawsuit (worst case). Disclosing the bug after the worst case can get you arrested, and even if you manage to avoid jail, you have spent a lot of money defending yourself. This is the reason behind the full-disclosure movement: to prevent vendors from gagging researchers who discover bugs.

>Whenever you damn well please unless you are contractually obligated to do otherwise.

And "contractually obligated" necessarily involves an exchange of valuable consideration (e.g., they give you money in return for your agreement to keep your mouth shut). In general, software EULAs are not contracts for exactly this reason.

As others have already mentioned, first see how the people who released the software react.

How long you need to wait depends greatly on their response and the security risk involved. If you found it, someone else might have found it as well. If you find a security loophole in ssh, notify the ssh maintainers with a proof of concept; if there is no response, I would say wait a week.

Important security fixes generally take 2-3 days. The reason for this is that, that way, all the distros can get the update out together.

If I ever run across a vulnerability in any closed-source software, I will submit that information anonymously to prevent the authorities from treating me as if I were a criminal or terrorist. The only exception to that rule would be if I found a vulnerability in something licensed under the GNU GPL; then I will simply submit a bug report through the regular channels or email the author of the software directly.

As long as the vendors get a grace period (or, in some cases, forever) as a timeframe, the incentive to put off fixing the real issue won't go away.

The discussion about full disclosure versus responsible disclosure is a side issue to the real questions. Why don't the vendors do proper testing before releasing software? Why do they refrain from fixing bugs they fully know about? Why should researchers take any responsibility for the vendors' customers when it's obvious the vendors won't think twice about security or QA?

In most cases you warn the vendor first, providing complete details including exploit code so they have no excuse for not being able to duplicate the problem. If the vendor won't acknowledge your report within a reasonable time (say 7 days), will not commit to a timeline for having either a fix, a workaround or a mitigation strategy for users within a reasonable time (say 14 days from acknowledgement, with the deadline being 30-90 days out depending on severity) or fails to meet the deadline, then you disclose to users including full details, exploit code (so the problem can be independently verified without having to rely on your word that it exists) and a recommended mitigation strategy. Demanding payment for the report is never appropriate unless the vendor has publicly committed to a "bug bounty" and your demand is what they've publicly committed to.
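The timeline above (acknowledgement within ~7 days, a committed plan within ~14 days of that, disclosure at 30-90 days depending on severity) can be sketched as a small helper. The severity-to-days mapping here is an assumption for illustration, not any industry standard:

```python
from datetime import date, timedelta

# Days until full disclosure, by severity (assumed mapping for illustration).
DISCLOSURE_DAYS = {"critical": 30, "high": 60, "moderate": 90}

def disclosure_schedule(reported: date, severity: str) -> dict:
    """Compute the key deadlines for a vulnerability reported on `reported`."""
    ack_due = reported + timedelta(days=7)      # vendor must acknowledge
    commit_due = ack_due + timedelta(days=14)   # vendor must commit to a timeline
    disclose = reported + timedelta(days=DISCLOSURE_DAYS[severity])
    return {"ack_due": ack_due, "commit_due": commit_due, "disclose": disclose}
```

If any deadline passes unmet, the policy described above says to disclose in full, including exploit code and a recommended mitigation strategy.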

There'd be occasional exceptions to the above. If, for instance, the vulnerability is theoretical and you can't create actual exploit code for it, demanding the vendor fix it is inappropriate (by the same token, though, it's far less of a problem to discuss the issue in public if it truly can't be feasibly exploited). The deadline should be more flexible for less severe vulnerabilities. If the vendor has a track record of responding inappropriately to reports (e.g., by threatening legal action against the researcher), immediate anonymous disclosure may be a better approach.

I found a ring 0 exploit in a popular operating system, whereby any unprivileged user-mode process could get ring 0 access. It's been about a month since I told the developer, and they haven't said when a fix would be coming.

It's a ring 0 exploit, but actually turning it into a root exploit is annoyingly complex due to the design of this operating system. There is nothing computer-theoretic stopping it, just complexity regarding the way page tables work. The exploit gives you ring 0 control very easily.

If it's merely difficult, remember this: with computer code it takes just one single genius somewhere figuring it out to enable every 2-bit script-kiddie with a mouse and an ego to use it. Once there's a working method it's just a matter of packaging it up into an easy-to-use kit.

The trick, however, is in figuring out the method. Now, if you've got ring 0, then by definition you've got more power on the system than root has. If you can't turn this into user-mode root, then I have to be skeptical of your claim.

The no-code-publish approach seems reasonable to me: publish the flaw to everyone, including CERT and the vendor, but include no code or exploit in anything publicly readable. Give the vendor your exploit code and a deadline, after which you will publish the exploit. If no fix appears by the deadline, then you publish.

If you're really a whitehat, tell the vendor first. This will keep the exploit away from blackhats while the vendor fixes the hole. Security through obscurity works, up until the moment it doesn't. So if the vendor does not fix the hole quickly, and you suspect the blackhats are about to discover it, then you need to inform the people who are vulnerable to it, if possible without broadcasting it to the blackhats and script kiddies. Yes, that's rarely possible, but when it is, it's the right thing to do.

Giving the vendor an opportunity to apply a fix is all good and dandy, but any researcher must remember this:

Real blackhats don't wait around for a patch before they go on the prowl for systems to exploit. And they don't announce their discoveries in public.

Vendors are not only racing the "egotistic researcher" looking to score points by pulling their pants down, but also against the crackers looking to not only pull their pants down, but rape them in the ass.

Publish details about the bug as soon as you find it; publish an exploit as soon as possible. If every discoverer of security flaws did this, software devs would learn very quickly to have second thoughts about releasing unchecked code. I say that as a software dev.

Seriously, you think you're smarter than everyone else? That you're the only one who discovered a flaw? Puh-lease. The Chinese government alone is probably throwing more manpower at finding flaws in US software than there are developers in the US.

Contact vendor and a reputable third party, such as CERT, simultaneously.

Give the vendor a -very- small window (at -most- a week or so) to respond with (1) contact information; (2) an assigned issue identifier (at least an interim one while triaging); and (3) a specific timeframe for a follow-up response, not to exceed N days (your choice; 14, 30, 60, 90, etc.). This response does not need to be a full triage and verification, just a real response of "we assigned this to John Doe to research as issue number 54321-unverified."

If you get the point of Kerckhoffs's principle, you understand why, *if* all other things are equal, *then* open source code is inherently better than closed source code: public disclosure finds the flaws in the source code faster, so they can be fixed faster.

If you want to force a *proprietary* vendor to *immediately* fix a vulnerability, you have to disclose it to the public first, as public pressure is the only real leverage you have.

What it sounds like to me is that MS thinks that you owe them. They're doing you a favor by fixing any problems that you find (and if it takes them 6 months, well, darn it, it's because they were busy patching another Windows activation exploit, and that is certainly more important). Were I to find a security flaw in Windows, I would probably release it for all the world to see, without notifying MS. They have paid employees to find and fix these problems; I guess they don't need my help.

How about giving the vendor time to issue a patch only if said vendor has earned the goodwill of the community, or at least not earned its ill will? Abuse of monopoly as found in various courts of law? Immediately go public. Vendor lock-in practices? Immediately go public. Silly patent lawsuits over ideas that are not really original? Immediately go public. Public statements about how they now take security very seriously and it is a top priority for them, and then no substantial improvement? Immediately go public.

Microsoft should not be the one that sets the writing into law when it comes to any security issue. They have an extremely poor reputation as it is. What Microsoft defines should carry little weight in the community where these issues are discovered and discussed. Over the years, those who have uncovered these security issues have been well restrained, and they seem significantly better equipped to decide how and when issues should be disclosed. If not for them, discovered issues would likely never be disclosed to the public, and the public would not be exerting enough pressure to get them fixed (let alone prioritized).

Agreed. It would be called a "conflict of interest," since their efforts in such regards would be solely to protect their own interests, and to protect their locked-in users only to the extent required to maintain their monopoly.

But that has never stopped companies from buying laws in the past... sadly. :-(