Kaltsas et al attempted to understand the clinical and epidemiological impact of transitioning from a 2-step algorithm, which involved screening with GDH followed by a CCNA, to NAAT for the diagnosis of CDI in a major cancer hospital [183]. Test performance for 128 samples was assessed in the context of symptoms, severity of illness, and patient outcomes. Two time periods were evaluated: May to August 2008 and March to May 2010 [183]. For both time periods, CDI cases were defined as having clinical symptoms, including diarrhea (84%), fever and abdominal pain (4%), nausea and vomiting (2%), abdominal pain, leukocytosis, or sepsis (2% each), and fever alone (1%), with a positive NAAT or a positive CCNA [183]. Different NAATs were used in the first compared with the second time period, and no information was provided on overall test positivity or other indicators of the prevalence of CDI in the tested population. Testing for CDI was performed on diarrheal (84%) and nondiarrheal (16%) stool samples in patients in whom it may be very difficult to interpret the true clinical significance of diarrhea, namely cancer patients undergoing intensive chemotherapy [183]. There was no statistically significant difference in clinical presentation at the onset of infection or severity of disease between patients positive by NAAT alone and those positive by both NAAT and the 2-step algorithm [183]. Among 23 toxin-negative, NAAT-positive patients who were not treated, the only possible adverse outcome was recurrence in 3 patients; however, only 15 (65%) had diarrhea on the day of testing [183]. Recurrence of CDI was more common when both assays were positive than when NAAT alone was positive (31% vs 14%; P = .03). In summary, it is not clear what the results mean from this modestly sized cohort of difficult-to-interpret cases (patients with a high frequency of multifactorial diarrhea), other than the impact of a 2-fold increase in reported C. difficile rates when transitioning to the more sensitive, but probably less specific, NAAT method [183].

Longtin et al assessed the impact of diagnostic test methods on CDI rates and the occurrence of complications based upon the tests used to diagnose CDI [184]. This was a prospective cohort study in Quebec over a 1-year period [184]. CDI was defined by documented diarrhea of ≥3 loose or liquid stools in <24 hours with symptoms lasting ≥24 hours, in combination with a positive test for toxin-producing C. difficile or a clinical diagnosis based upon histopathology or the presence of pseudomembranes on colonoscopy [184]. Structured data collection forms were used to prospectively collect information regarding complications and whether patients with positive tests met the case definition. All samples submitted to the laboratory were tested by a NAAT that detected the toxin B gene and by a 3-step algorithm that began with screening for GDH followed by toxin A/B EIA testing [184]. Samples positive by both methods (NAAT and 3-step algorithm) were considered positive for C. difficile; GDH-positive, toxin EIA-negative samples were retested using a CCNA [184]. Only NAAT results were reported to clinicians and infection control. A total of 1321 stool specimens from 888 patients were assessed over the 1-year period, of which 17% were positive by NAAT and 12.3% were positive by the 3-step algorithm [184]. There were 85 cases of healthcare-associated CDI detected by NAAT, whereas only 56 of these cases were diagnosed by EIA/CCNA (P = .01). Complications (ie, 30-day mortality, colectomy, ICU admission, or readmission for recurrence) were more common among patients positive by both test methods (NAAT and 3-step algorithm) than among cases detected by NAAT alone (39% vs 3%, P < .001). The major limitation of this study was that it was performed at a single center and only some of the specimens were tested by a recognized gold standard method (ie, CCNA). That said, the results support the findings by Planche and colleagues discussed below [185].
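The laboratory workflow used by Longtin et al (GDH screen, reflex toxin A/B EIA, and CCNA arbitration of GDH-positive/EIA-negative discordants, run in parallel with NAAT) can be sketched as a simple decision function. This is a schematic illustration of the reported 3-step algorithm, not laboratory software; the `StoolSample` structure and the result labels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class StoolSample:
    """Illustrative container for the assay results available on one specimen."""
    gdh_positive: bool        # glutamate dehydrogenase antigen screen
    toxin_eia_positive: bool  # toxin A/B enzyme immunoassay
    ccna_positive: bool       # cell cytotoxicity neutralization assay (reflex test)

def three_step_algorithm(sample: StoolSample) -> str:
    """Sketch of the 3-step algorithm: GDH screen -> toxin A/B EIA ->
    CCNA retest for GDH-positive, toxin EIA-negative discordants."""
    if not sample.gdh_positive:
        return "negative"      # GDH screen negative: stop, report negative
    if sample.toxin_eia_positive:
        return "positive"      # GDH+ and toxin EIA+: report positive
    # GDH-positive but toxin EIA-negative: arbitrate with CCNA
    return "positive" if sample.ccna_positive else "negative"
```

In the study, this algorithm ran alongside NAAT on every specimen, which is what made it possible to identify and follow the NAAT-positive, algorithm-negative discordant group.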

This all sounds ludicrous until we realise that our algorithms are increasingly being made in our own image. As we’ve learned more about our own brains, we’ve enlisted that knowledge to create algorithmic versions of ourselves. These algorithms control the speeds of driverless cars, identify targets for autonomous military drones, compute our susceptibility to commercial and political advertising, find our soulmates in online dating services, and evaluate our insurance and credit risks. Algorithms are becoming the near-sentient backdrop of our lives.

The most popular algorithms currently being put into the workforce are deep learning algorithms. These algorithms mirror the architecture of human brains by building complex representations of information. They learn to understand environments by experiencing them, identify what seems to matter, and figure out what predicts what. Being like our brains, these algorithms are increasingly at risk of mental-health problems.

Deep Blue, the algorithm that beat the world chess champion Garry Kasparov in 1997, did so through brute force, examining millions of positions a second, up to 20 moves in the future. Anyone could understand how it worked even if they couldn't do it themselves. AlphaGo, the deep learning algorithm that beat Lee Sedol at the game of Go in 2016, is fundamentally different. Using deep neural networks, it created its own understanding of the game, considered to be the most complex of board games. AlphaGo learned by watching others and by playing itself. Computer scientists and Go players alike are befuddled by AlphaGo's unorthodox play. Its strategy seems at first to be awkward. Only in retrospect do we understand what AlphaGo was thinking, and even then it's not all that clear.

To give you a better understanding of what I mean by thinking, consider this. Programs such as Deep Blue can have a bug in their programming. They can crash from memory overload. They can enter a state of paralysis due to a never-ending loop, or simply spit out the wrong answer from a lookup table. But all of these problems are solvable by a programmer with access to the source code, the code in which the algorithm was written.

Algorithms such as AlphaGo are entirely different. Their problems are not apparent by looking at their source code. They are embedded in the way that they represent information. That representation is an ever-changing high-dimensional space, much like walking around in a dream. Solving problems there requires nothing less than a psychotherapist for algorithms.

Take the case of driverless cars. A driverless car that sees its first stop sign in the real world will have already seen millions of stop signs during training, when it built up its mental representation of what a stop sign is. Under various light conditions, in good weather and bad, with and without bullet holes, the stop signs it was exposed to contain a bewildering variety of information. Under most normal conditions, the driverless car will recognise a stop sign for what it is. But not all conditions are normal. Some recent demonstrations have
shown
that a few black stickers on a stop sign can fool the algorithm into thinking that the stop sign is a 60 mph sign. Subjected to something frighteningly similar to the high-contrast shade of a tree, the algorithm hallucinates.

on prognostic grounds in certain anatomical patterns of disease or a proven significant ischaemic territory (even in asymptomatic patients). Significant LM stenosis, and significant proximal LAD disease, especially in the presence of multivessel CAD, are strong indications for revascularization. In the most severe patterns of CAD, CABG appears to offer a survival advantage as well as a marked reduction in the need for repeat revascularization, albeit at a higher risk of CVA, especially in LM disease.

Recognizing that visual attempts to estimate the severity of stenoses on angiography may either under- or overestimate the severity of lesions, the increasing use of FFR measurements to identify functionally more important lesions is a significant development (Section 5.4).

It is not feasible to provide specific recommendations for the preferred method of revascularization for every possible clinical scenario. Indeed, it has been estimated that there are >4000 possible clinical and anatomical permutations. Nevertheless, in comparing outcomes between PCI and CABG, Tables 8 and 9 should form the basis of recommendations by the Heart Team in informing patients and guiding the approach to informed consent. However, these recommendations must be interpreted according to individual patient preferences and clinical characteristics. For example, even if a patient has a typical prognostic indication for CABG, this should be modified according to individual clinical circumstances such as very advanced age or significant concomitant comorbidity.

NSTE-ACS is the most frequent manifestation of ACS and represents the largest group of patients undergoing PCI. Despite advances in medical and interventional treatments, mortality and morbidity remain high and, after the initial month, equivalent to those of patients with STEMI. However, patients with NSTE-ACS constitute a very heterogeneous group with a highly variable prognosis. Therefore, early risk stratification is essential for selection of medical as well as interventional treatment strategies. The ultimate goals of coronary angiography and revascularization are mainly twofold: symptom relief, and improvement of prognosis in the short and long term. Overall quality of life, duration of hospital stay, and the potential risks associated with invasive and pharmacological treatments should also be considered when deciding on a treatment strategy.

RCTs have shown that an early invasive strategy reduces ischaemic endpoints, mainly by reducing severe recurrent ischaemia and the clinical need for rehospitalization and revascularization. These trials have also shown a clear reduction in mortality and MI in the medium term, while the reduction in mortality in the long term has been moderate and MI rates during the initial hospital stay have increased (early hazard) [58]. The most recent meta-analysis confirms that an early invasive strategy reduces cardiovascular death and MI at up to 5 years of follow-up [59].

Planche et al sought to validate the reference methods for C. difficile diagnosis, namely TC and CCNA, against clinical outcomes in an attempt to derive the optimal diagnostic laboratory method [185]. This was a large observational, multicenter study of 12 420 routinely submitted fecal samples. The authors examined the results of the 2 reference assays (TC and CCNA) along with 4 commercial methods: 2 toxin A/B EIAs, a GDH assay, and a NAAT [185]. Limited clinical data were collected (all patients had diarrhea, but stool frequency was not known), and outcomes were assessed for 6522 inpatients who were stratified into 3 groups as follows: CCNA positive (group 1; n = 435), TC positive but CCNA negative (group 2; n = 207), and negative by both methods (group 3; n = 5880). On univariate analysis, leukocytosis was greater in group 1 than in group 2 or 3, and white blood cell (WBC) counts were similar in groups 2 and 3. However, both groups 1 and 2 had similarly low serum albumin levels compared with group 3; group 2, but not group 1, had a higher mean rise in creatinine than group 3. Both groups 1 and 2 had similarly longer mean lengths of stay (before and after testing) than group 3. All-cause 30-day mortality was markedly higher in group 1 (16.6%) than in group 2 (9.7%) (P = .022). The mortality in group 2 was not significantly different from that of the control group (8.6%) [185]. When the analysis was performed using NAAT in place of TC, the findings were similar, with an absolute difference in mortality of 6.9% between patients who were CCNA positive and those who were NAAT positive but CCNA negative (P = .004). The combination of a GDH immunoassay plus toxin EIA (TechLab assay) was almost identical in performance to CCNA. In a multivariate logistic regression model, group 1 patients were older and had greater leukocytosis, a greater serum creatinine rise, lower albumin, and higher 30-day mortality compared with group 3 [185]. Length of stay was not independently associated with group 1, and all other multivariate group comparisons, including mortality in group 1 vs group 2, were not significant. The failure to find a mortality difference between groups 1 and 2 on multivariate analysis may be due to the much smaller number of patients in group 2 than in group 3. Another limitation was the relatively low prevalence of true disease in the tested population, based upon the positivity rate of either the CCNA (5.9%) or TC (8.3%); this reflected national endemic rates of CDI at that time.
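The univariate mortality comparison between groups 1 and 2 can be approximately reconstructed from the reported group sizes and percentages. The sketch below, assuming death counts back-calculated from those percentages (roughly 72/435 in group 1 and 20/207 in group 2, approximations rather than the study's raw data), recovers the roughly 6.9-percentage-point absolute difference and, via a standard two-proportion z-test, a significance level in the neighborhood of the published P = .022.

```python
from math import sqrt, erfc

# Counts back-calculated from reported percentages (approximations, not raw study data):
# group 1 (CCNA positive):               16.6% of 435 inpatients -> ~72 deaths
# group 2 (TC positive, CCNA negative):   9.7% of 207 inpatients -> ~20 deaths
deaths_g1, n_g1 = 72, 435
deaths_g2, n_g2 = 20, 207

p1, p2 = deaths_g1 / n_g1, deaths_g2 / n_g2
abs_diff = p1 - p2  # absolute 30-day mortality difference

# Two-proportion z-test using a pooled standard error
pooled = (deaths_g1 + deaths_g2) / (n_g1 + n_g2)
se = sqrt(pooled * (1 - pooled) * (1 / n_g1 + 1 / n_g2))
z = abs_diff / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided P from the normal approximation

print(f"absolute difference: {abs_diff:.1%}, z = {z:.2f}, P = {p_value:.3f}")
```

The computed P differs slightly from the published value because the exact death counts and the study's chosen test statistic are not reproduced here.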

Clinical outcome data were available for 69% (143/206) of inpatients with discordant reference method results. Of these patients, 75 (52%) who were TC positive but CCNA negative received no CDI treatment. Four of these 75 untreated, TC-positive, CCNA-negative patients died, and none had a diagnosis of this infection on their death certificate. Also, 64 of 143 (45%) patients with a discordant reference method result did not have diarrhea recorded on their stool chart; for the remainder, the median duration of diarrhea was 2 days.