Last week, a hedge fund announced it was betting medical device manufacturer St. Jude Medical’s (NYSE: STJ) stock was about to tank because hackers had found vulnerabilities in some of the company’s products. The fund based its position on the findings of MedSec, a cybersecurity research firm that shared its conclusions with the public.

STJ shares ultimately sank almost 5 percent, and have since mostly recovered. In an increasingly digital world populated with hackers, such incidents are inevitable. What makes this one particularly newsworthy is how MedSec handled its discovery.

MedSec’s decision to warn investors was audacious and controversial because of a little-known, unwritten truce that exists between security researchers, who seek to discover and exploit weaknesses in cyber products, and vendors. Some commentators say MedSec broke the truce.

That truce is referred to as “Responsible Disclosure.” Basically it says that, if researchers uncover a vulnerability in Company X’s product, they should tell the company about it confidentially before going public. That gives the company a chance to fix things first and protect the public.

If the truce collapses, it will have significant implications for cybersecurity. Criminals, gangs and nation-states conduct offensive research in order to mount attacks. If they learn about a vulnerability before a company repairs it, rest assured they will exploit it.

These issues significantly impact my work as a police officer who fights cybercrime. Anyone who doesn’t abide by the truce could be in the market of selling attacks through the darker corners of the Internet.

Responsible disclosure is imperfect. It places too much responsibility on security researchers and not enough on the company whose product is vulnerable. Responsible disclosure ought to be, but often isn’t, followed by responsive repair.

Americans have become inured to tales of credit card and email breaches, but medical security breaches have a far greater impact. Whether it is a breach of medical records – our most intimate and sensitive data – or a vulnerability that affects the proper functioning of life-support systems, medical cyber security is where the rubber meets the road: the cost of wrong can literally be life and death.

Our society thus depends on integrity on both sides of this issue. Security researchers must have the integrity to provide the manufacturer confidentially with crucial information about its product. In turn, manufacturers must acknowledge, test and, if necessary, act on the information.

If a manufacturer fails, by being neither responsible nor responsive, then the responsible thing for the researcher to do is go public so that people know about the risk. Ultimately the market is the controlling force.

Justine M. Bone, chief executive officer of MedSec, said that since 2013, St. Jude Medical had been repeatedly warned by researchers and the government about vulnerabilities similar to those her firm claims to have recently discovered.

While acknowledging that MedSec has a financial incentive, Ms. Bone told me in an interview for this article that St. Jude’s failure to act previously was the primary reason her company decided to go public with its new findings. I believe her.

Note that MedSec’s statements on this matter have not been perfect: they have not been specific enough for other researchers, or the United States Computer Emergency Readiness Team, to validate. That is a problem that I and others have asked MedSec to address in the coming days. MedSec tells me that being more specific would itself be irresponsible. “We believe this should be addressed by St. Jude,” Bone told me, “but we were certainly not going to print a ‘How-To’ guide for people to potentially exploit this.”

Yet St. Jude’s response was almost a perfect example of the kind of frustration MedSec says it has with the vendor. The company misleadingly points to a global standard as if it were proof of a bespoke security program; it hand-waves around specific weaknesses, such as unencrypted operating-system files and allegations of howling authentication flaws, while asserting that "remote monitoring" is safe and effective. No one challenged the utility of remote monitoring; MedSec alleges significant flaws in St. Jude's specific implementation of it.

That marketing language, along with a blanket denial of the most specific charges, is in my professional opinion an indication that the company intends to address not the issues raised but the reputational and share-price fallout.

For some contrast, consider what happened this month to another publicly traded company that manufactures a device on which lives depend: the Axon police body camera system by Taser International (NASDAQ: TASR). Reggie Dodd, a security researcher from Maryland’s Montgomery County, discovered vulnerabilities in some of the code in Axon docking stations. If exploited, they might have serious consequences.

According to Mr. Dodd, Taser’s information security department immediately acknowledged the vulnerability and within a day committed to fix it. Taser globally deployed updated code within two weeks. Mr. Dodd then published his findings.

This was the first time an external researcher had reported a vulnerability to Taser.

Jenner Holden, Taser’s vice president of information security, told me his group had planned for such an occurrence and was in fact already instituting a “bug bounty” – a program that will pay researchers who discover and responsibly disclose vulnerabilities.

Taser has been a model of how to engage in responsible security collaboration by ethically responding to vulnerabilities.

Just as news reporters were not irresponsible when they pointed out price gouging at EpiPen maker Mylan (NASDAQ: MYL), security researchers act ethically and responsibly when they share their findings after the process fails. If a company historically has not upheld its end of the bargain by fixing vulnerabilities, public disclosure is the proper route.

Security research is complicated stuff; it doesn’t lend itself to sound bites. The security community certainly has cried wolf in the past. Yet attempting to saddle researchers with all the responsibility for the disclosure of vulnerabilities, when they have no authority to compel vendors to do the right thing, is just plain wrong.