Data Diving

What lies untapped beneath the surface of published clinical trial analyses could rock the world of independent review.

By Kerry Grens | May 1, 2012

TIP OF THE ICEBERG: Independent reviewers of clinical trial data have access to just a minuscule percentage of the actual information. PUSHART

A few weeks before Christmas 2009, the world was in the grip of a flu pandemic. More than 10,000 people had died, and roughly half a million people had been hospitalized worldwide; tens of millions had been infected. In the United States, millions of doses of Tamiflu, an antiviral medication, had been released from national stockpiles. “December 2009 was a point in the H1N1 outbreak where there was a lot of talk about a second or third wave of this virus coming back and being more deadly,” says Peter Doshi, now a postdoctoral researcher at Johns Hopkins University and a member of an independent team of researchers tasked with analyzing Tamiflu clinical trials. “Anxiety and concern were really peaking.”

So it was no small blow when, that same month, Doshi and his colleagues released their assessment of Tamiflu showing that there was not enough evidence to merit a claim that the drug reduced the complications of influenza.[1. T. Jefferson et al., “Neuraminidase inhibitors for preventing and treating influenza in healthy adults: systematic review and meta-analysis,” BMJ, 339:b5106, 2009.] Their report had been commissioned by the Cochrane Collaboration, which publishes independent reviews on health-care issues to aid providers, patients, and policy makers. The findings, published in the British Medical Journal, made headlines around the world.

Doshi’s group arrived at this conclusion because they’d run into a lack of available data. Some of the widespread belief that Tamiflu could blunt pneumonia and other dangerous health consequences of flu was based on a meta-analysis of several clinical trials whose results had never been published. Because the data could not stand up to independent scrutiny by the researchers, these trials were tossed out of the Cochrane review; other published trials were disqualified because of possible bias or lack of information.

Just as the 2009 BMJ paper was to be published, Roche, the maker of Tamiflu, opted to do something unorthodox—the company agreed to hand over full clinical study reports of 10 trials, eight of which had not been published, so that independent researchers could do a proper analysis. Within a few weeks after the publication of its review, the Cochrane team was downloading thousands of pages of study files.

One publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages.

Clinical study reports are massive compilations of trial documents used by regulators to make approval decisions. Doshi says he had never heard of, let alone worked with, a clinical study report. “This is how in the dark most researchers are on the forms of data there are. Most people think if you want to know what happened in a trial, you look in the New England Journal of Medicine or JAMA.”

And in fact, that is how many meta-analyses or systematic reviews of drugs are done. As publications amass, independent analysts gather up the results and publish their own findings. At times they might include unpublished results offered by the trial investigators, from the US Food and Drug Administration’s website, or from conference abstracts or other “grey literature,” but for the most part, they rely simply on publications in peer-reviewed journals. Such reviews are valuable to clinicians and health agencies for recommending treatment. But as several recent studies illustrate, they can be grossly limited and misleading.

Doshi and his colleagues began poring over the reams of information from Roche, and realized that not only had their own previous reviews of Tamiflu relied on an extremely condensed fraction of the information, but that what was missing was actually important. For instance, they found that there was no standard definition of pneumonia, says Tom Jefferson of the Cochrane Collaboration and lead author of the 2009 review. And among people who had been infected with influenza, it appeared that the placebo and treatment groups were not on equal footing. “We realized that all of these [analyses] led to misleading results because the treatment groups [were] not comparable for that subpopulation,” Doshi says.

In January of this year, the group published its latest review of Tamiflu, which included the unpublished evidence obtained from Roche in 2009.[2. T. Jefferson et al., “Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children,” Cochrane Database of Systematic Reviews, Issue 1, 2012.] The authors concluded that Tamiflu falls short of claims—not just that it ameliorates flu complications, but also that the drug reduces the transmission of influenza. In an e-mail sent to The Scientist, Roche says the Cochrane review was not limited to people who had laboratory-confirmed flu, but encompassed people with influenza-like symptoms, thereby possibly underestimating Tamiflu’s efficacy. “Independent and eminent scientists reviewed data from the Tamiflu trials, the inception and design of the studies which produced the data, and the assumptions made,” the company states. “Roche stands behind the robustness and integrity of our data supporting the efficacy and safety of Tamiflu.”

Jefferson is not convinced, and the experience has made him rethink his approach to systematic review, the Cochrane method of evaluating drugs. For 20 years, he has relied on medical journals for evidence, but now he’s aware of an entire world of data that never sees the light of publication. “I have an evidence crisis,” he says. “I’m not sure what to make of what I see in journals.” He offers an example: one publication of a Tamiflu trial was seven pages long. The corresponding clinical study report was 8,545 pages.

“It just blows the mind,” says Doshi. “A trial’s an extraordinarily complex process, and what we see in the published literature is an extreme synthesis of what goes on.” The big question is: What does that mean for the validity of independent reviews?

Unpublished data—Is it all bad news?

Clinical study reports like those provided by Roche are the most comprehensive descriptions of trials’ methodology and results, says Doshi. They include details that might not make it into a published paper, such as the composition of the placebo used, the original protocol and any deviations from it, and descriptions of all the measures that were collected.

But even clinical study reports include some level of synthesis. At the finest level of resolution are the raw, unabridged, patient-level data. Getting access to either set of results, outside of being trial sponsors or drug regulators, is a rarity. Robert Gibbons, the director of the Center for Health Statistics at the University of Chicago, had never seen a reanalysis of raw data by an independent team until a few years ago, when he himself was staring at the full results from Eli Lilly’s clinical trials of the blockbuster antidepressant Prozac.

For some time, Gibbons had questioned the belief that antidepressants are linked to an increased risk of suicide. Previous meta-analyses by independent reviewers on suicidal thoughts and behaviors among people taking the drugs had for the most part relied on summary data, Gibbons says. At a meeting at the Institute of Medicine a few years ago, Gibbons spoke with a senior investigator at Eli Lilly and brought up the idea of doing a full workup of the original data.

If there is some lid put on some aspects of those trials, that is frustrating one important goal of research, which is sharing information.—SIDNEY WOLFE, PUBLIC CITIZEN

Much to his surprise, shortly after the meeting Gibbons was in possession of the numbers. “We haven’t seen anybody get these kinds of data,” he says. He decided to push his luck. Gibbons had served as an expert witness for Wyeth, and he approached attorneys for the pharmaceutical company to ask if they would also share data from trials of the company’s antidepressant Effexor. “They got back to me, and they were agreeable to provide all their adult data,” he recalls.

Gibbons and his colleagues went to work reanalyzing the data. “Everything was exquisitely well documented,” he says. The raw data allowed them to take into account each person’s depression severity and to determine individual outcomes rather than averages. Their results, published earlier this year, ended up bucking much of the published literature on antidepressants.[3. R.D. Gibbons et al., “Suicidal thoughts and behavior with antidepressant treatment: Reanalysis of the randomized placebo-controlled studies of fluoxetine and venlafaxine,” Archives of General Psychiatry, online February 6, 2012.],[4. R.D. Gibbons et al., “Benefits from antidepressants: Synthesis of 6-week patient-level outcomes from double-blind placebo-controlled randomized trials of fluoxetine and venlafaxine,” Archives of General Psychiatry, online March 5, 2012.] For one, they found no link between Prozac and suicide risk among children and young adults. And secondly, they found that Prozac appeared to be more effective in youth, and antidepressants far less efficacious in the elderly, than previously thought. “I think these kinds of analyses and the discrepancies in the findings are good reason to be concerned about our reliance on traditional meta-analyses,” Gibbons says.

Although some of his results reflect negatively on the drugs, others are clearly very positive. There’s been an understanding for some time that publication bias is a real occurrence, and that it often favors the drug. Trials that show no efficacy are less likely to get into print than trials that demonstrate a positive effect.[5. K. Lee et al., “Publication of clinical trials supporting successful new drug applications: A literature analysis,” PLoS Medicine, September 2008.] So when Lisa Bero at the University of California, San Francisco, decided to redo 42 published meta-analyses of drugs and include unpublished, but available, data, she suspected the drugs would fare poorly. “But that’s not what we found,” she says.

She and her colleagues analyzed nine drugs using unpublished data from the FDA. For any approved drug, the agency makes available a summary of data used to vet the medication. When Bero’s group added these data to the meta-analyses, they found that all but three turned out to have a different result. Nineteen of the redone analyses showed a drug to be more efficacious, while 19 found a drug to be less efficacious.[6. B. Hart et al., “Effect of reporting bias on meta-analyses of drug trials: Reanalysis of meta-analyses,” BMJ, 344:d7202, 2012.] The one harm analysis that was reanalyzed showed more harm from the drug than had been reported. “We showed data that make a difference are not being published,” Bero says.
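The mechanics behind such reversals are easy to see in a toy fixed-effect (inverse-variance) meta-analysis. The numbers below are invented for illustration, not drawn from any of the studies discussed: a few published trials showing a benefit are pooled, then two hypothetical unpublished null trials are added, and the pooled estimate shrinks.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate with a 95% CI.

    Each trial is weighted by the inverse of its variance, so large,
    precise trials dominate the pooled result.
    """
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical published trials: all show a benefit
published = pooled_effect([0.45, 0.50, 0.40], [0.04, 0.05, 0.04])

# Same trials plus two hypothetical unpublished null trials,
# of the kind that might surface only in an FDA summary
all_trials = pooled_effect([0.45, 0.50, 0.40, 0.02, -0.05],
                           [0.04, 0.05, 0.04, 0.03, 0.03])

print(published[0])   # pooled effect from published trials alone (~0.45)
print(all_trials[0])  # noticeably smaller once the null trials count (~0.22)
```

Nothing about the arithmetic is exotic; the point is simply that the pooled answer depends entirely on which trials make it into the pool.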

While the FDA’s summaries of trial data are available to any researcher, they’re not necessarily easy to work with, and often researchers don’t include them in meta-analyses. “I think the FDA reports are an extremely valuable data source, but they’re not the full application [for drug approval], and they have redacted parts,” Bero says. She’s found that potentially important elements, such as patient characteristics or conflict-of-interest information, have been blacked out. The quality of the PDFs can also be poor, with crooked pages or light print; and sometimes there is no index for a document hundreds of pages long.

Such data files are quite different from the quality of the documents Gibbons was able to work with. While he urges independent researchers to try to access raw data, he notes that “getting all the data is not a trivial problem.”

Why aren’t the data shared?

Although summaries of clinical trials are available from the FDA, unabridged clinical study reports or the raw data are hard to come by. Keri McGrath Happe, the communications manager at Lilly Bio-Medicines, wrote in an e-mail to The Scientist that the company has a committee that reviews requests to obtain unpublished clinical trial results. “I can tell you that it is not common” to have a request filled for raw data, she says. “Granting access to raw data isn’t as easy as opening file cabinets and handing over documents. A team has to go through each piece of data to find what specific data [are] needed to fulfill the request.”

On top of being an administrative burden, handing over clinical study reports or raw data is considered hazardous to the commercial value of a drug. "The simple truth is that drug discovery is enormously expensive," says Jeff Francer, the assistant general counsel of the Pharmaceutical Research and Manufacturers of America (PhRMA). "In order for companies to engage in the immensely capital-intensive work to develop a medicine, there has to be some protection of the intellectual property. And the intellectual property is the trial data."

The FDA tends to concur. The agency receives much more information about a drug than it ever releases. According to Patricia El-Hinnawy, an FDA public affairs officer, “as a matter of law and regulation, patient-level clinical trial data has been historically regarded as confidential commercial and/or trade secret information.”

The other route to obtaining unpublished results is through a Freedom of Information Act (FOIA) request, but just as with putting in a request to a company, there is no guarantee that the information will be released. Plus, “FOIA requests take a long time,” says Michelle Mello, a professor of law and public health at the Harvard School of Public Health. “In a world where we’re concerned about being able to rapidly assess certain safety signals, this is not a route to producing timely information.”

Gibbons says his studies on antidepressants make a strong case for greater data sharing. The other argument, says Sidney Wolfe, director of the health research group at the advocacy organization Public Citizen, is that “it’s a moral and ethical thing too. People who are participating in clinical trials, aside from whatever possible benefit will happen to them . . . are doing it for the benefit of humanity. And if there is some lid put on some aspects of those trials, that is frustrating one important goal of research, which is sharing information.”

The question of whether results from human experiments are private information or a public good has been debated for some time. In 2010, the European Medicines Agency (EMA), the European Union’s equivalent of the FDA, finally made a decision. “We had resolved that clinical data is not commercial confidential,” says Hans-Georg Eichler, the EMA’s senior medical officer. “It doesn’t belong to a company, it belongs to the patient who was in the trial.”

Efforts to increase data sharing

The EMA’s new policy is that if someone requests data from clinical trials of an approved medication, the agency will provide it. Doshi’s group took advantage of this to obtain about 25,000 pages of information on Tamiflu, which they used for their 2012 Cochrane update.2

Eichler says there have only been a handful of requests to date, too few to know how the policy is working out. Fulfilling such requests can be cumbersome, he says. It takes time to carefully review the data and make sure patients cannot be identified. Eichler adds that in the future he’d like to see a system where all clinical trial results are entered into a system accessible by other researchers.

Under the FDA Amendments Act of 2007, the agency requires trial sponsors to post the summary results of registered trials on clinicaltrials.gov within one year of completing the trial. But few comply. A recent survey of the website found that of 738 trials that should have fallen within the mandate, just 163 had reported their results.[7. A.P. Prayle et al., “Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: Cross sectional study,” BMJ, 344:d7373, 2012.] In a statement sent to The Scientist, Congressman Henry Waxman (D-CA) says, “I was alarmed by the recent studies showing that compliance with this law has been sorely lacking and that industry is not reporting the required study results.”

While companies are certainly part of the problem in this case, they were actually more likely to report results than were researchers whose clinical trials had no industry backing but were funded by foundation or government money. "I think it's so important to acknowledge that is a huge problem throughout" the clinical research enterprise, says Kenneth Getz, a professor at the Tufts Center for the Study of Drug Development. And industry has made some moves to be more proactive about sharing data.

Last year, the medical device company Medtronic agreed to share all of its original data regarding Infuse, a bone growth product that had been facing considerable skepticism about its efficacy. Yale professor Harlan Krumholz approached the company with a challenge: if Medtronic thinks the Infuse data can stand up to external scrutiny, then let an external group have a look. The company agreed, and a Yale University group serves as the middleman between the company and the independent reviewers.

Joseph Ross, a Yale Medical School professor who’s involved in the project, says two review teams have been selected, and they should have results by the summer. Medtronic is paying $2.5 million for the external reviews, a price Ross says is small compared to what gets invested in—and ultimately earned from—a successful drug. He says it’s the first experiment of its kind. “In my most optimistic moments I think it has to be the way of the future. I don’t think the public realized that this data isn’t available for everybody to understand,” says Ross. “In my most pessimistic moments, this only happens one other time—when a company gets in hot water.”

Journals are also lighting a fire under trial sponsors to provide their results to independent reviewers more quickly and completely. In 2005, the International Committee of Medical Journal Editors initiated a requirement that trials had to be registered, say on clinicaltrials.gov, in order to be published. “That sent shock waves,” says Elizabeth Loder, an editor at the British Medical Journal.

Since then, Loder’s own publication has been digging into the effects of unpublished data. She says the BMJ asks independent reviewers and meta-analysts to what extent they tried to obtain unpublished results for their studies. And this January, the journal published a special issue of reports dedicated to “missing” clinical trial data.[8. R. Lehman and E. Loder, “Missing clinical trial data,” BMJ, 344:d8158, 2012.] “I suppose you could say that publishing the original [2009] report on Tamiflu, we were newly sensitized to the dangers,” says Loder. “I think we wanted to keep everybody focused on that problem.”

For a special issue next year, Loder says BMJ is going to look at what exactly is the harm of having used incomplete data sets for so many meta-analyses and systematic reviews over the years. “Even though going forward new requirements for posting study results will probably improve the situation, we remain concerned about previously done studies that are unpublished and unavailable, and how that might affect the existing evidence base.”

While Getz agrees that more data could improve meta-analyses, he cautions against “data dumping”—completely opening the floodgates to unpublished results. “I think just the idea of making more information available misses the point. You reach a level of data overload that makes it very hard to draw meaningful and reasonable conclusions, unless you’re an expert with a lot of time.”

But Cochrane Collaboration’s Jefferson says bring it on. While the clinical study reports he received numbered in the thousands of pages, they were still incomplete. Roche says it provided as much as the researchers needed to answer their study questions. But accepting that response would require a trust that is clearly eroded. “We hold in the region of 30,000 pages. That’s not a lot,” Jefferson says. “We don’t know what the total is. We’re looking at the tip of the iceberg and we don’t know what’s below the waterline.”

Comments

"You reach a level of data overload that makes it very hard to draw meaningful and reasonable conclusions, unless youâ€™re an expert with a lot of time"

Sorry, but I wholly disagree. The whole point of such analysis is that it be correct. If that requires a lot of work - that's what it requires.Â

The corrollary is that it is better to do what is easy - but probablyÂ wrong.Â Why? To get publications that will result in tenure, when those publications are little more than fancy, long-winded fantasies?Â That is utterly indefensible in my view.Â

We are talking about a profession that purports to inform medicine with "evidence based medicine" that is "Scientific" and "more accurate".Â Related branches have sentenced millions of young children to death by declaring the risk of a vaccine like Rotateq unacceptable due to differential levels of intussuception that any reasonable person would ascribe to chance.

NowÂ we have a major player arguing that we should accept hisÂ wrong analyses just because it's easier? Blah, blah, etc.

One sentence in this article took my breath away: "...there has to be some protection of the intellectual property. And the intellectual property is the trial data.â€쳌Â (OK, two sentences.)Â Â

In other words, the company owns the truth about the effects of their products, andÂ has the right toÂ control it.Â Since we are talking about data that has already been published in summary form, it is tantamount to claiming a right toÂ control the truth selectively.Â Â

I have no doubt that this is something that companies would like to be able to do, since the whole truth may diminish the value their ownership of a product.Â I'm just surprised to hearÂ it said out loud.Â

"You reach a level of data overload that makes it very hard to draw meaningful and reasonable conclusions, unless youâ€™re an expert with a lot of time"

Sorry, but I wholly disagree. The whole point of such analysis is that it be correct. If that requires a lot of work - that's what it requires.Â

The corrollary is that it is better to do what is easy - but probablyÂ wrong.Â Why? To get publications that will result in tenure, when those publications are little more than fancy, long-winded fantasies?Â That is utterly indefensible in my view.Â

We are talking about a profession that purports to inform medicine with "evidence based medicine" that is "Scientific" and "more accurate".Â Related branches have sentenced millions of young children to death by declaring the risk of a vaccine like Rotateq unacceptable due to differential levels of intussuception that any reasonable person would ascribe to chance.

NowÂ we have a major player arguing that we should accept hisÂ wrong analyses just because it's easier? Blah, blah, etc.

One sentence in this article took my breath away: "...there has to be some protection of the intellectual property. And the intellectual property is the trial data.â€쳌Â (OK, two sentences.)Â Â

In other words, the company owns the truth about the effects of their products, andÂ has the right toÂ control it.Â Since we are talking about data that has already been published in summary form, it is tantamount to claiming a right toÂ control the truth selectively.Â Â

I have no doubt that this is something that companies would like to be able to do, since the whole truth may diminish the value their ownership of a product.Â I'm just surprised to hearÂ it said out loud.Â

The difference (in volume, format, detail, and so on) between raw data and published result is typically no problem in any area of science with whose workings I am familiar. Data reduction, statistical assessment, and so on are simply part of the long road between lab floor and peer-reviewed publication, and experienced researchers are quite familiar with that road and traverse it routinely. The problem in the case of the pharmaceutical industry is of course that, as commercial agents, their interests are different from those of scientists doing basic research, and for that reason a huge regulatory apparatus has been erected to "keep them honest" regarding the claims they make, in order not to endanger their customers. In Sweden the documentation for certification of a new medication amounts to a pickup truck full of folders. The process of producing that truckload of folders through clinical trials imposes a huge financial burden on the pharmaceutical company, which ends up in the price of the medication on the market. Why not lift the burden of assessing efficacy from the shoulders of the pharmaceutical companies by letting the certifying government agency conduct the clinical trials? That would eliminate the conflict-of-interest aspect, the clinical trials would revert to the status of basic research rather than obstacle course (presumably simplifying the process), and the customers would pay for the safety of their medication through their income tax rather than at the drugstore counter.

Forgive me if what I am about to say is a shocker. Big changes in thinking take time to soak in, so I ask you, fellow reader, to try to avoid a knee-jerk response, and let it fester in your thinking for a while... Please.

The time has come when "artificial judgment" may be entering the same arena as artificial intelligence. And the management of scientifically derived data (classification, organization, storage, retrieval... and "data mining") may be on the threshold of being handled better by machines than humans can handle it.

Please don't choke on this prospect.

I know we've never done it this way before. I know artificial intelligence machines have never been capable of anything resembling "human judgment" before, much less of applying "human values" to processing data. But... never before the 1990s did we have technology which could beat a world chess champion. And not until WATSON -- the current state-of-the-art IBM computer system, with recent tweaks -- has a machine been able to beat the winningest contestants on the Jeopardy! show.

I know. I know. That is sensationalist stuff. But if cloud computing has been developed to a capability of processing in minutes data that would take a hundred human math geniuses a hundred years to work through, that, too, is sensational. So superior is the mathematical capability of computers that it would be laughable to propose a staged contest to entertain a TV audience with that capability. Computer technology is now so far and away capable of winning, and by so great a margin of time and accuracy, that we would not even broach such a contest.

A brief, simplistic account of how WATSON won a Jeopardy! contest is inadequate for evaluating how, say, ten years from today, or a year from today, WATSON-like programming could be further developed to process millions, even billions, of research data points, applying every criterion publishers now apply to which papers make the cut to distribution and (as indicated in the subject article) which reductions of words and figures and graphs make the cut to being presented in lieu of the enormous entireties of information in the actual papers represented. In fact, machines could take into account millions (even billions) more quantifications and qualifications virtually all at once (actually one at a time, but at a rate approaching the speed of light in a vacuum) than any human publisher, or room full of human publishers, could consider (virtually all at once)... ever.

I am not arguing here that WATSON-like technology has come to a state of being applicable now. I do perceive, and wish to argue, however, that it has been brought to a stage at which we should do more than simply speculate about its potential for development and use by wise, objective publishers, who would vastly increase their ability to do what they do both quantitatively AND qualitatively. Human judgment and human values and, indeed, anthropocentric self-perpetuating and survival biases are not translatable into machine language or machine logic operations as yet. (Whether they ever shall be, let only the bravest even wonder; but my point here is about now, and the foreseeable future, and dealing with things as we humans have found fit and been able to bring them along to our advantage, real or perceived, as the case may be.)

All the above considered, let me urge any who have mastered one field, or preferably at least two fields, of modern science (and who hopefully grasp the issues and arguments of philosophy of science) to dare to undertake one more challenge: that of learning HOW the developers of WATSON got that machine to parallel the dynamics of human judgment in "learning" to distinguish between language nuances of denotation versus connotation, not merely for the English language as spoken in the U.S. but also for multiple other languages, even down to regional variances... as well as dealing with millions of FACT SAMPLES.

We could call such broad ranges of facts "trivia," if we wish. But the idea of what is trivial is like the idea of a weed. No fact, in an assessment of an actually occurring phenomenon, or an actually occurring statistical propensity, or an actual consensus among scientists, or anomalies which tend to refute a current synthesis... is trivial. And, importantly, neither is any detail too much "trouble" for a machine such as WATSON to take into account, since it can consider more than any human, or room full of humans, has the man-hours to consider.

Let none of us reject the idea of studying how WATSON-like technology has achieved what it has achieved. And then -- not from the standpoint of how we have always done things up to now, and not on grounds of principle -- make a realistic estimate of how machines can assist in reducing a great volume of data into the summations that are optimum. And let us begin thinking in terms of just how far machines have been brought, whereby we might ask computers to assess whether we (or more particularly, publishers) could apply our wonderful human judgment to getting a machine, perhaps, to enable us to do that magic better. And let us consider how current data banks work, whereby one field of science inputs data using one set of standards as to what is important in its own field, while those in another field input data we could profit from taking into account if we first knew it existed -- and which artificial "judgment" might find for us by use of the same kinds of nuance comparisons as WATSON now uses to deal with local, regional, and national idioms of language. Surely we recognize the similarities in these kinds of field-specific esoteric nuances of wording and relativity of valuation.

Please think about this and do not merely shrug it off.

If the time for machine-aiding of judgment and management and retrieval in respect to scientific data, and its publication, is not upon it.Â It looms large in our future... perhaps even in our near-term future.Â Â

Isn't it interesting how considerations of ownership and exploitation of information muddy the waters of scientific progress?

There is much theoretical impetus today behind the notion that the profit motive, and especially personal opportunism and greed, are drivers of good economic and social health. I would not deny for a moment that such belief accords with the need to raise human children, and educate students, to live in the present socio-economic-philosophical-political dynamic which is... and which, therefore, is the REALITY in which all of us live today -- and in which we are all obliged to train our youth to survive.

Whether there is any other potentially viable way for us to get along with one another, or to shape thinking about society, economics, philosophical conjectures, and political (including legal and regulatory) policies... let imagination be our best place to consider.

But, taking the perceived "real" world and the "necessary that it be this way" world as they are, and pondering the perception that personal greed and litigiousness in how we humans get along, individually and as a species, are drivers of "progress," let us think further into what we would mean by "progress."

In the progress of scientific acquisition of information, and the utilization of that scientific information, is it not accurate to say that the progress, at least, of UTILIZING newly acquired information is IMPEDED by the treatment of that information as "property"?

Again, I am not proposing there is a better way. After all, many incentives to attain more information of scientific value are profit-oriented ones.

But, in a perfect world, with perfect humans and a perfect economy and legal system, would it not be wonderful to have a system of data sharing and data usage that was not encumbered by ownership issues?

Forgive me if what I am about to say is a shocker. Mention of huge changes in thinking takes time to soak in, so I ask you, fellow reader, to try to avoid a knee-jerk response to it, and let it percolate in your thinking for a while... Please.

The time has come when "artificial judgment" may be entering the same arena as artificial intelligence. And the management of scientifically derived data classification, organization, storage, retrieval... and "data mining" may be on the threshold of being handled better by machines than humans can handle it.

Please don't choke on this prospect.

I know we've never done it this way before. I know artificial intelligence machines have never been capable of anything resembling "human judgment" before, much less of applying "human values" to processing data. But... Never before the 1990s did we have technology which could beat a world chess champion. Never before WATSON -- the current state-of-the-art IBM computer system, with recent tweaks -- had a machine been able to beat the winningest contestants on the Jeopardy show.

I know. I know. That is sensationalist stuff. But if cloud computing has been developed to a capability of processing in minutes data that would take a hundred human math geniuses a hundred years to work through, that, too, is sensational. So superior is the mathematical capability of computers that it would be laughable to propose a staged contest to entertain a TV audience with that capability. Computer technology is now so far and away capable of winning, and by so great a margin of time and accuracy, that we would not even broach such a contest.

A brief, simplistic account of how WATSON won a Jeopardy contest is inadequate for purposes of evaluating how, say, ten years from today, or a year from today, WATSON-like programming could be further developed to process millions, even billions, of research data points, applying every criterion publishers now apply to which papers make the cut to distribution and (as indicated in the subject article) which reductions of words and figures and graphs make the cut to being presented in lieu of the enormous entireties of information in the actual papers represented. In fact, machines could take into account millions (even billions) more quantifications and qualifications virtually all at once (actually one at a time, but at a rate approaching the speed of light in a vacuum) than any human publisher, or room full of human publishers, could consider... ever.

I am not arguing here that WATSON-like technology has come to a state of being applicable now. I do perceive, and wish to argue, however, that it has been brought to a stage at which we should do more than simply speculate about its applicability to being developed for potential use by wise, objective publishers, who would vastly increase their ability to do what they do both quantitatively AND qualitatively. Human judgment and human values and, indeed, anthropocentric self-perpetuating and survival biases are not translatable into machine language or machine logic operations as yet. (Whether they ever shall be, let only the bravest even wonder; but my point here is about now, and the foreseeable future, and dealing with things as we humans have found fit and been able to bring them along, to our advantages real or perceived, as the case may be.)

All the above considered, let me urge any who have mastered one field, or preferably at least two fields, of modern science (and who hopefully grasp the issues and argumentation of philosophy of science) to dare to undertake one more challenge: that of learning HOW the developers of WATSON got that machine to parallel the dynamics of human judgment in "learning" to distinguish between language nuances of denotation versus connotation, not merely for the English language as spoken in the U.S. but also for multiple other languages, even down to regional variances... as well as dealing with millions of FACT SAMPLES.

We could call such broad ranges of facts "trivia," if we wish. But the idea of what is trivial is like the idea of a weed. No fact, in an assessment of an actually occurring phenomenon, or an actually occurring statistical propensity, or an actual consensus among scientists, or anomalies which tend to refute a current synthesis... is trivial. And, importantly, neither is any detail too much "trouble" for a machine such as WATSON to take into account, by virtue of having more man-hours to consider than any human, or room full of humans, has.

Let none of us reject the idea of studying how WATSON-like technology has achieved what it has achieved. And then -- not from the standpoint of how we have always done things up to now, and not on grounds of principle -- let us make a realistic estimate of how machines can assist in reducing a great volume of data to the summations that are optimum. And let us begin thinking in terms of just how far machines have been brought, whereby we might apply our wonderful human judgment to getting a machine, perhaps, to enable us (or more particularly, publishers...) to do that magic better. And let us consider current data banks, whereby one field of science inputs data using one set of standards as to what is important in its own field, while those in another field input data we could profit from taking into account if we first knew it existed -- and which artificial "judgment" might find for us by use of the same kinds of nuance comparisons as WATSON now uses to deal with local, regional, and national idioms of language. Surely we recognize similarities in these kinds of field-specific esoteric nuances of wording and relativity of valuation.

Please think about this and do not merely shrug it off.

If the time for machine aiding of judgment and management and retrieval in respect to scientific data, and its publication, is not upon us, it looms large in our future... perhaps even in our near-term future.

I write to let you know that you are not alone in your thinking and to note that people who refuse to consider what is about to happen in this and related areas will suddenly find themselves playing catch-up with the rest of the world.

I have frequently wondered if a machine similar to Watson might be able to perform the first-pass diagnostic activities of my local doctor. A lot of medicine today is of the form: 1 - Examine the patient. 2 - Compare symptoms to a predefined list of illnesses. 3 - Select the closest match. 4 - Write a prescription for the proper remedy. 5 - If there is no match, perform some logical test and repeat from step 1.
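That five-step loop can be sketched as a toy rule-based matcher. Everything below -- the condition list, the overlap score, the threshold -- is invented for illustration, and is emphatically not how Watson or any real diagnostic system works:

```python
# A toy sketch of the five-step loop above: match observed symptoms
# against a predefined list of conditions and pick the closest fit.
# Conditions, symptoms, and remedies are invented for illustration.

CONDITIONS = {
    "influenza":    {"symptoms": {"fever", "cough", "aches"},  "remedy": "antiviral"},
    "common cold":  {"symptoms": {"cough", "congestion"},      "remedy": "rest and fluids"},
    "strep throat": {"symptoms": {"fever", "sore throat"},     "remedy": "antibiotic"},
}

def first_pass_diagnosis(observed, threshold=0.5):
    """Steps 2-4: compare symptoms to each condition and select the
    closest match; fall through to None (step 5: re-examine, run more
    tests) when nothing scores above the threshold."""
    best_name, best_score = None, 0.0
    for name, info in CONDITIONS.items():
        # Overlap score: fraction of the condition's symptoms observed.
        score = len(observed & info["symptoms"]) / len(info["symptoms"])
        if score > best_score:
            best_name, best_score = name, score
    if best_name is None or best_score < threshold:
        return None  # step 5: no match -- loop back with more information
    return best_name, CONDITIONS[best_name]["remedy"]

print(first_pass_diagnosis({"fever", "cough", "aches"}))  # → ('influenza', 'antiviral')
print(first_pass_diagnosis({"dizziness"}))                # → None
```

The point of the sketch is only that the commenter's "predefined list plus closest match" loop is mechanical enough to automate; the hard part, as the surrounding discussion notes, is everything this toy leaves out.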

With the looming shortage of primary care doctors and the increasing complexity of knowledge about diseases and their cures, I feel we have already passed the point where the local primary care doctor has the resources and time to reach a reasonable conclusion in complicated cases.

Before I am hung from the highest yardarm for making such a suggestion, it only takes a few minutes to look at the long list of new research reported each day to see that no human can keep up with the reading, much less the assimilation of this mountain of data into useful information.

Perhaps the technology is not yet ready, but like you, I see it as very close. Implementation will be hard. Does a machine get the same reimbursement as a real doctor? As a physician's assistant? How about liability? Do you sue the machine, the doctor that owns the machine, or the maker of the machine? Yes, there are questions, but they are not without answers, and the future is coming fast in this area, as in the research arena you so eloquently described.

It's unfortunate that Robert Gibbons' work is included in an article that celebrates the work of Peter Doshi and Tom Jefferson. Unfortunate not because he denies the risks of SSRIs, whereas I think these risks are all too real -- that is an issue most readers of this journal will feel is not for them to adjudicate; it will appear to most readers to be a scientific dispute that might best be decided, in fact, by data access. Unfortunate because Doshi and Gibbons are at opposite ends of the data-access spectrum.

Peter Doshi and Tom Jefferson in one sense probably don't care what the data on Tamiflu show, and might even be relieved if they showed it worked better than now seems, just as I would be if the data on SSRIs showed them to be freer of risks and more effective than the data in fact do show.

Doshi and Jefferson argue that without access we cannot resolve issues -- as I do. They have had access, but Robert Gibbons, portrayed here as having access, hasn't had it -- in the sense that he cannot share his data with anyone, and so we are left trusting him, or Lilly, or Wyeth. This is quite different from Doshi and Jefferson, who can share their data with those taking an opposite point of view, who can then challenge them in public on what the data in fact show. On the question of access, Dr. Gibbons seems as far removed from Doshi and Jefferson as it is possible to get, and more closely linked to their critics, who presumably also claim that Roche has given them data access.

Whether or not there is access to data is not something scientists have led the way on -- it's been journalists. In this case it is as likely to be the readers of this article, the author, and the editors -- who have no training on the issues linked to SSRI risks or benefits -- who will move this issue forward, if it is to move forward.

I really appreciate the work of this researcher mining all the specific data. The same problem defining a patient's condition occurred in a clinical trial I participated in, in 1992: NCI SWOG 9005, a Phase 3 trial of mifepristone for recurrent meningioma. The drug put my tumor in remission when it regrew post surgery. However, other more despairing patients had already been grossly weakened by multiple brain surgeries and prior standard brain radiation therapy, which had failed them before they joined the trial. They were really not as young, healthy, and strong as I was when I decided to volunteer for a "state of the art" drug therapy upon my first recurrence. Brain radiation scrambles the DNA, so that the histological tumor type may change and become more aggressive over time if the tumor recurs again. This drug is supposed to block an overactive normal progesterone growth hormone receptor on my low-grade brain tumor, the way it works to shrink benign uterine fibroids. I could not get the names of the anonymous members of the Data and Safety Monitoring committee who closed the trial as "no more effective than placebo." I had flunked the placebo the first year, and my tumor did not grow for the next three years I was allowed to take the real drug. I finally managed to get FDA approval to take the drug again in February 2005, and my condition has remained stable ever since, according to my MRIs. Selective truth is the only truth or reality we will ever have, and it's based on our own neurolinguistic experiences, memories, and personal background. I always like to question the truth of ruling authorities. Many smart people assume the way they do things is the only right way to do things. I say patient characteristics matter, and even scientific motives may be for profit, or prohibition, and/or religious suppression. Look how we as a nation turned on homebrewing bootleggers during Prohibition.
And now how we have all turned on evil smokers, while we are finally beginning to give gay people the same civil and legal personal rights others often take for granted. I believe the drug mifepristone I take has continuously been legally suppressed by right-wing conservatives who do not want women to have independent control of their own fertility, or to be able to choose whether or not they want to be mothers, with all the responsibilities of parenthood. That is the reason this morning-after pill has not been fully developed for its other life-saving purposes: it's not politically popular, because of abortion fears and beliefs. What's the difference if we continue to send our eager young men and women off to die fighting in foreign wars for mega-corporate oil and energy profits? Powerful men want to be sure they will always have enough foot soldiers to control other people's behaviors, so women must have more babies to grow up as pawns used in male war games on a global scale, with extensive collateral war damage among women and children. Please watch out, I might be an evil feminist now. Please forgive me, guys; I trust God will understand my independent thinking about his big picture.

This approach is welcome and long overdue -- and should be the norm for published treatment reviews. I've occasionally been hired to attempt "rescue" of a result from badly controlled medical data.

My own research and professional interests are generally in environmental monitoring, however, where "meta-analysis" means getting raw data from multiple studies and combining compatible data into one large set that has greater statistical power than any individual study. As far as I'm concerned, what the medical folks call a meta-analysis is nothing more than a literature review dolled up with numbers. There's no increase in statistical power at all.
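A minimal sketch of the pooling the commenter describes, with invented numbers: three simulated studies of the same quantity are combined into one raw data set, and the pooled standard error of the mean comes out smaller than any single study's -- which is the gain in statistical power at issue:

```python
import math
import random

random.seed(0)

# Three hypothetical studies measuring the same quantity (say, a
# pollutant concentration), each contributing raw observations.
studies = [[random.gauss(10.0, 2.0) for _ in range(n)] for n in (25, 40, 60)]

def standard_error(sample):
    """Standard error of the mean: s / sqrt(n)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return math.sqrt(var / n)

individual_se = [standard_error(s) for s in studies]

# Pool the compatible raw data into one large set (n = 125 here).
pooled = [x for s in studies for x in s]
pooled_se = standard_error(pooled)

print([round(se, 3) for se in individual_se], round(pooled_se, 3))
```

Since the standard error shrinks roughly as 1/sqrt(n), the pooled set of 125 observations beats even the largest single study of 60 -- the effect that summary-level reviews, in this commenter's view, cannot deliver.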

Your comments are excellent. They provide, however, reasons only why a WATSON-like program should never be utilized with the expectation that it would do anything it cannot do.

An automobile should not be expected to drive you to work, until and unless an interactive, automated program were developed and implemented for purposes of coordinating all traffic, such that many individual motor vehicles were programmed to go from a point A to a point B with optimum efficiency and safety for all concerned. The designing of such a complex "driving system" would seem prohibitively expensive, although the technology and expertise to build it, if money and labor hours were super-abundant, already exist.

Nothing I have written (I hope) is interpretable as suggesting that a WATSON-like program should be driven by machine logic alone.

The "science" of medicine is neither perfected nor perfectible. But neither are human knowledge, human cognition, nor human judgment. And it is doubtful that the smartest medical school graduate, or the smartest post-school-experienced human physician or surgeon (or RN, or pharmacist, or medical first responder), is perfect or perfectible.

The best any human can do, unaided by the latest "wisdom" of medical science together with the latest technology of medical science, vastly underperforms what the smartest, most creative, most disciplined humans can do WITH THE WISE UTILIZATION OF the highest and best wisdom and tools. And to be even more general about this phenomenon, it can safely be said that humans without mankind's accumulated tools are as hamstrung as those tools are without mankind.

Surely we would not avoid relying upon a motorized wheelchair for a non-ambulatory patient on grounds that, unaided by human judgment (or a complex computerized system comparable to the automobile traffic system cited above), it would either be immobile or would run indiscriminately into walls and furnishings and equipment and any human who failed to get out of its way.

Now, all that having been said, just a few of the things a WATSON-like system could aid a wise user in doing better than, say, a stone hatchet, include:

1. It could reduce the record-writing personnel time required of physicians and RNs, by enabling a checklist of items which should be performed, which could be checked off as completed. One of the leading causes of RN burnout today is the amount of time required for paperwork. I won't go into the many reasons that paperwork is necessary in the absence of a faster, less time-intensive way of accomplishing all it serves to do. Suffice it to say that the paperwork is non-expendable until and unless a less cumbersome system is provided;

2. Physicians must leave instructions for RNs, for pharmacists, for billing purposes... Ask any surgeon how frequently his instructions are misread or misinterpreted. Ask any physician or pharmacist how frequently a prescription is misread or misinterpreted. A WATSON-like program should not be expected to override a doctor's instructions or a doctor's prescription, but it should have no problem applying limits, such as the limits on which medications should not be given to a child, or an elderly patient, or a patient allergic to them... Once such limits are fed into a logic system, any violation of such a limit should virtually instantaneously balk and ask for confirmation. If only a physician is qualified to confirm, then only the ordering physician, or another physician designated by him if he cannot be contacted in a timely manner, should be permitted to alter or override the "machine limit objection";

3. No diagnostician is capable of considering each and every possible syndrome suggested by any one given set of symptoms. While 99% of the patients any given doctor sees may have one of a short list of syndromes, that is exactly the reason he may be unlikely to recognize an exception quickly. In the process of making a diagnosis, there are many things to consider. A single test result may rule out half the possible syndromes which could cause a known subset of symptoms. Another may rule out, say, 80% of those. Some acute symptoms need to be treated quickly on their own merits, but doing so may negate the value of certain tests that need to be performed to further narrow the likelihoods. Some actions, therefore, must be performed not only quickly but also in a certain order, lest one test performed in the wrong order obfuscate the results of another. One procedure in an emergency room may obviate the choice of another. Patients have a right to choose, too, what expense will be undergone, and what heroics to prolong life are not desired in some situations. A WATSON-like system could be fed such information quickly; certain procedures that neither physician nor patient would wish performed may -- in the absence of certain permissions -- be advised by hospitals and doctors to reduce the risk of being sued for NOT doing them. If something has to be signed to prevent such risk-aversion protocols (though possibly desired by no one), and it is signed, a sophisticated WATSON-like program could, upon being told a particular risk-aversion protocol is about to be performed, NOTIFY that both doctor and patient have signed off on avoiding it...

4. The above are merely representative and nowhere near exhaustive. Hundreds, or perhaps even thousands, of such "aids" to diagnosis, treatment, risk aversion, care... even diet issues, could be both surely considered and dispensed, or avoided, quickly and "wisely." The list goes on...
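The "machine limit objection" in item 2 can be sketched as a simple rule check: an order that violates a hard limit is neither silently passed nor silently rejected -- it balks and demands an explicit physician override. The drugs, limits, and function names below are invented for illustration; a real system would draw on curated formularies and allergy records:

```python
# Toy sketch of item 2: hard limits fed into a logic system, with any
# violation balking and requiring explicit physician confirmation.
# All drug names and thresholds here are invented.

LIMITS = {
    "drug_x": {"min_age": 18, "max_daily_mg": 400},
    "drug_y": {"min_age": 0,  "max_daily_mg": 50},
}

def check_order(drug, patient_age, daily_mg, allergies=(), physician_override=False):
    """Return (approved, reason). A limit violation is never silently
    dropped: it raises a 'machine limit objection' that only an
    explicit physician override can clear."""
    rules = LIMITS.get(drug)
    objections = []
    if drug in allergies:
        objections.append("patient is allergic to " + drug)
    if rules:
        if patient_age < rules["min_age"]:
            objections.append("below minimum age for " + drug)
        if daily_mg > rules["max_daily_mg"]:
            objections.append("exceeds maximum daily dose of " + drug)
    if objections and not physician_override:
        return False, "machine limit objection: " + "; ".join(objections)
    return True, "approved"

print(check_order("drug_x", patient_age=9, daily_mg=500))
print(check_order("drug_x", patient_age=9, daily_mg=500, physician_override=True))
```

Note the design choice the comment insists on: the machine never overrides the doctor; it only refuses to proceed quietly when a known limit is crossed.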

Notice that I have preferred the word "wise" and its various forms above to "knowledge." Mankind does not have "knowledge." Mankind must muddle through on far less than all information about everything. Our cumulative "wisdom" is -- indeed has to be -- updated and upgraded, and from time to time totally reversed or overthrown, by new information and/or new syntheses that better rationalize why an old synthesis has encountered so many anomalies. The best we have, therefore, is our best-educated, best-analyzed guesses. A WATSON-like system could only optimize the accumulated wisdom to any given date, while provided with the information that physician A might do one thing in a given scenario and another might do something else. The odds of the outcome of the one might better the odds of the other, given identical scenarios... but no two scenarios are ever precise duplicates. A diagnostician or a surgeon could virtually instantly select one procedure over the other in such a case.

Machine logic and machine storing and retrieval of data exceed human capacity to store and "process" data by astronomical proportions. So far, machines must in some "wise" way be steered in what they do. WATSON should not practice medicine. But doctors should not avoid any tool because it needs to be "human driven," either. If that were a criterion, even a stone ax would be off limits.

Quote: I am not arguing here that WATSON-like technology has come to a state of being applicable now.

Answer: Will it ever, in medicine? The problem is, Watson is TOLD / programmed to believe Vioxx was good for you. Watson would be as good as the best doctor, BUT only as good. Watson can NEVER get to say, "Vioxx is bad for you." ONLY that which is programmed into Watson will ever be available to Watson. And the WEIGHT of the information programmed -- Linus Pauling's weight versus Dr. Christopher's weight, versus the weight of Crick, alleged thief, or Lendon Smith, alleged criminal, or Pasteur, alleged thief, etc.? If Watson could be "turned loose" WITH a program which sorts and finds correlations in medical studies -- data mining -- THEN Watson could reign supreme in medicine. Until then, Watson, available as it is NOW, should be easily programmable to be a VERY good consultant to have in the hospital, IF the proper records are kept and the proper questions asked of the patient. One could easily have iPads in the ICU, available in any language, to record all symptoms and previous problems, which, combined with common medical tests, should ascertain the problem in 99.9% of all people. Imho.

Your comments are excellent. They provide, however, only reasons why a WATSON-like program should never be utilized with the expectation that it can do something it cannot do.

An automobile should not be expected to drive you to work unless and until an interactive, automated program were developed and implemented to coordinate all traffic, so that many individual motor vehicles could be routed from a point A to a point B with optimum efficiency and safety for all concerned. Designing such a complex "driving system" would seem prohibitively expensive, although the technology and expertise to build it, were money and labor hours super-abundant, already exist.

Nothing I have written (I hope) is interpretable as suggesting that a WATSON-like program should be driven by machine logic alone.

The "science" of medicine is neither perfected nor perfectible. But neither are human knowledge, human cognition, nor human judgment. And it is doubtful that the smartest medical-school graduate, or the most experienced human physician or surgeon (or RN, or pharmacist, or medical first responder), is perfect or perfectible.

The best any human can do, unaided by the latest "wisdom" of medical science and the latest technology of medical science, vastly underperforms what the smartest, most creative, most disciplined humans can do WITH THE WISE UTILIZATION OF the highest and best wisdom and tools. And to be even more general about this phenomenon, it can safely be said that humans without mankind's accumulated wisdom are as hamstrung as those tools are without mankind.

Surely we would not avoid relying upon a motorized wheelchair for a non-ambulatory patient on the grounds that, unaided by human judgment (or by a complex computerized system comparable to the automobile-traffic system cited above), it would either be immobile or would run indiscriminately into walls, furnishings, equipment, and any human who failed to get out of its way.

Now, all that having been said, just a few of the things a WATSON-like system could aid a wise user in doing better than, say, a stone hatchet could, include:

1. It could reduce the record-writing time required of physicians and RNs by enabling a checklist of items to be performed, checked off as completed. One of the leading causes of RN burnout today is the amount of time spent doing paperwork. I won't go into how many reasons that paperwork is necessary in the absence of a faster, less time-intensive way of accomplishing all it serves to do. Suffice it to say that the paperwork is non-expendable until and unless a less cumbersome system is provided;

2. Physicians must leave instructions for RNs, for pharmacists, for billing purposes... Ask any surgeon how frequently his instructions are misread or misinterpreted. Ask any physician or pharmacist how frequently a prescription is misread or misinterpreted. A WATSON-like program should not be expected to override a doctor's instructions or a doctor's prescription, but it should have no problem applying limits, such as which medications should not be given to a child, or to an elderly patient, or to a patient allergic to them... Once such limits are fed into a logic system, any violation of a limit should make the system virtually instantaneously balk and ask for confirmation. If only a physician is qualified to confirm, then only the ordering physician (or another physician designated by him if he cannot be contacted in a timely manner) should be permitted to alter or override the "machine limit objection";

3. No diagnostician is capable of considering each and every possible syndrome suggested by any one given set of symptoms. While 99% of the patients any given doctor sees may have one of a short list of syndromes, that is exactly the reason he may be unlikely to recognize an exception quickly. In the process of making a diagnosis, there are many things to consider. A single test result may rule out half the possible syndromes which could cause a known subset of symptoms. Another may rule out, say, 80% of those. Some acute symptoms need to be treated quickly on their own merits, but treating them may negate the value of certain tests needed to further narrow the likelihoods. Some actions, therefore, must be performed not only quickly but also in a certain order, lest one test performed in the wrong order obfuscate the results of another. One procedure in an emergency room may obviate the choice of another. Patients have a right to choose, too, what expense will be undergone, and what heroics they do not desire to prolong life in some situations. A WATSON-like system could be fed such information quickly. Certain procedures that neither a physician nor a patient would wish to be performed may, in the absence of certain permissions, be advisable to hospital and doctors merely to reduce the risk of being sued for NOT doing them. If something has to be signed to waive such risk-aversion protocols (though possibly desired by no one), and it is signed, a sophisticated WATSON-like program could, upon being told a particular risk-aversion protocol is about to be performed, NOTIFY staff that both doctor and patient have signed off on avoiding it...

4. The above are merely representative and nowhere near exhaustive. Hundreds, or perhaps even thousands, of such "aids" to diagnosis, treatment, risk aversion, care... even diet issues, could both be sure to be considered, and be dispensed or avoided, quickly and "wisely." The list goes on...
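The "machine limit objection" described in point 2 above can be sketched in a few lines of code. This is a minimal illustration, not a real clinical decision-support system: the drug names, age limits, and record fields below are all hypothetical, invented only to show the shape of the idea, namely that the system never overrides the physician's order, it merely balks and demands explicit confirmation when an order violates a pre-loaded limit.

```python
# Sketch of a "machine limit objection": the system holds an order that
# violates a pre-loaded limit until the ordering physician confirms it.
# All drugs, limits, and fields here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    allergies: set = field(default_factory=set)

@dataclass
class Order:
    drug: str
    confirmed_by_physician: bool = False  # explicit override flag

# Hypothetical limits fed into the logic system.
MIN_AGE = {"aspirin": 16}        # e.g. not for young children
MAX_AGE = {"sedative_x": 75}     # e.g. caution in the elderly

def objections(order: Order, patient: Patient) -> list:
    """Return every limit violation for this order; empty means no objection."""
    problems = []
    if order.drug in patient.allergies:
        problems.append(f"patient is allergic to {order.drug}")
    if patient.age < MIN_AGE.get(order.drug, 0):
        problems.append(f"{order.drug} not indicated under age {MIN_AGE[order.drug]}")
    if patient.age > MAX_AGE.get(order.drug, 200):
        problems.append(f"{order.drug} not indicated over age {MAX_AGE[order.drug]}")
    return problems

def dispense(order: Order, patient: Patient) -> str:
    """Balk on any violation unless the physician has confirmed the order."""
    problems = objections(order, patient)
    if problems and not order.confirmed_by_physician:
        return "HOLD: " + "; ".join(problems)
    return "dispense"
```

Note that the override path is deliberately narrow: the only way past a hold is the `confirmed_by_physician` flag, mirroring the comment's requirement that only the ordering physician (or a designated colleague) may alter or override the objection.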

Notice that I have preferred the word "wise" and its various forms above to "knowledge." Mankind does not have "knowledge." Mankind must muddle through on far less than all information about everything. Our cumulative "wisdom" is, indeed has to be, updated and upgraded, and from time to time totally reversed or overthrown, by new information and/or new syntheses that better rationalize why an old synthesis has encountered so many anomalies. The best we have, therefore, is our best-educated, best-analyzed guesses. A WATSON-like system could only optimize the accumulated wisdom to any given date, provided with the information that physician A might do one thing in a given scenario while another might do something else. The odds of the one outcome might better the odds of the other, given identical scenarios... but no two scenarios are ever precise duplicates. Still, a diagnostician or a surgeon could virtually instantly select one procedure over the other in such a case.

Machine logic and machine storage and retrieval of data exceed human capacity to store and "process" data by astronomical proportions. So far, machines must, in some "wise" way, be steered in what they do. WATSON should not practice medicine. But doctors should not avoid any tool because it needs to be "human driven," either. If that were a criterion, even a stone ax would be off limits.

Quote: I am not arguing here that WATSON-like technology has come to a state of being applicable now.

Answer: Will it ever, in medicine? The problem being: Watson is TOLD / programmed to believe Vioxx was good for you? Watson would be "as good as" the best doctor, BUT only as good. Watson can NEVER get to say "Vioxx is bad for you"? ONLY that which is programmed into Watson will ever be available to Watson. And the WEIGHT of the information programmed: Linus Pauling's weight versus Dr. Christopher's weight, versus the weight of Crick, alleged thief, or Lendon Smith, alleged criminal, or Pasteur, alleged thief, etc.? If Watson could be "turned loose" WITH a program which sorts and finds correlations in medical studies, data-mining, THEN Watson could reign supreme in medicine. Until then, Watson, available as it is NOW, should be easily programmable to be a VERY good consultant to have in the hospital, IF the proper records are kept and the proper questions asked of the patient? One would/could easily have iPads in the ICU, available in any language, to record all symptoms and previous problems, which, combined with common medical tests, should ascertain the problem in 99.9% of all people. Imho.