Tranexamic acid (TXA) is now widely accepted as an effective
means to reduce blood loss during spinal deformity surgery, and it is increasingly being
adopted for any fusion surgery in which significant blood loss is anticipated. Most
of the spine surgery literature has evaluated IV TXA, typically administered as
an initial bolus followed by a continuous infusion for the remainder of the
case. The total joints literature has demonstrated the efficacy of oral and
topical TXA, though the spine literature has not evaluated the PO preparation.
Advocates of PO TXA note the reduced cost and ease of administration. In order
to compare IV and PO TXA in spinal fusion, Dr. Yu and colleagues from Detroit
performed an RCT in which 83 patients undergoing lumbar or thoracolumbar fusion
were randomized to receive IV or PO TXA. The IV medication was given as a 1 g
bolus prior to incision followed by a second 1 g bolus prior to closure. The
patients randomized to PO TXA received 1.95 g PO 2 hours prior to incision. The
average fusion length was 3.8 levels, 15% underwent pedicle subtraction
osteotomies, and 26% underwent interbody fusions. The two groups were
similar at baseline, though the IV group had a significantly lower BMI (28.5
vs. 32.1), a lower baseline platelet count (205 vs. 240), and was
less likely to undergo an interbody fusion (16% vs. 38%). The primary
outcome, drop in hemoglobin, was not significantly different between the two
groups (3.36 g/dL IV vs. 3.43 g/dL PO). The other outcome measures, including
estimated blood loss, drain output, rate of post-operative transfusion, and
rate of thromboembolic events, were not significantly different between the two
groups. The authors concluded that PO TXA was as effective as IV TXA
and recommended the use of PO TXA due to its lower cost ($14 vs. $53) and ease
of administration.

The authors should be congratulated for successfully performing
a Level 1 RCT addressing a clinically relevant question. The results strongly suggest
that PO and IV TXA are equally effective and have similarly safe side effect
profiles. The authors do not comment on blinding, so one must assume the study
was not blinded. It would have been relatively easy to blind the surgeons (and
potentially the patients) to group assignment, though it is not clear that
would have affected the results in a meaningful way. The main outcome measure,
drop in hemoglobin, is objective and was therefore unlikely to be affected by bias. The
only other concern with the methodology is that the authors did not use a continuous
IV infusion during surgery, and IV TXA given as a bolus is likely not circulating
at a sufficient concentration to be effective in surgeries over four hours long
(the average surgery was about 4 ½ hours). As such, the IV TXA patients undergoing
longer surgeries might have had lower blood loss had a continuous infusion been
used for the duration of the case. While the conclusion that IV and PO TXA
are equally effective is likely valid, it is unclear if PO TXA should be widely
adopted. In the scope of a spinal fusion surgery that likely costs tens of
thousands of dollars, a $40 savings is negligible. The authors do point out
that changing to PO TXA could save the US healthcare system $20 million per
year, though sadly that is a rounding error when it comes to national
healthcare expenditures. The argument that administering PO TXA is easier than
IV TXA is also questionable, as it requires that the medication be given 2
hours prior to incision. Some patients do not arrive until less than two hours
before surgery, and it is easy to imagine that the dose is missed or delayed in
the pre-operative holding area. At my institution, TXA is administered by
anesthesia along with pre-operative antibiotics, and this is confirmed at the
time out. In this model, if the TXA is overlooked, it is caught at the time out
and administered prior to making the incision. This paper makes it clear that
PO TXA is a reasonable option in systems that find using it advantageous. A
more pressing question is defining the indications for TXA use. Given its good
safety profile, it seems reasonable to consider using it in any lumbar or
thoracolumbar fusion.
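
The cost argument above can be sanity-checked with quick arithmetic. Assuming the per-case prices quoted in the study ($53 IV vs. $14 PO), the authors' $20 million national estimate implies roughly half a million eligible fusions per year; this back-calculation is mine, not the authors':

```python
# Per-case drug costs quoted in the study
iv_cost_usd = 53
po_cost_usd = 14
savings_per_case = iv_cost_usd - po_cost_usd  # $39, the "~$40" cited above

# Annual case volume implied by the authors' $20 million savings estimate
implied_annual_cases = 20_000_000 / savings_per_case
print(f"${savings_per_case} saved per case -> "
      f"{implied_annual_cases:,.0f} fusions/year to reach $20M")
```

About 513,000 fusions per year, which is a plausible order of magnitude for US lumbar fusion volume and suggests the $20 million figure is internally consistent, even if it remains a rounding error nationally.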

Please read Dr. Yu’s article on this topic in the June 1
issue. Would you consider using PO TXA at this point based on this article? Let
us know by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS
Associate Web Editor

Tricortical iliac crest was the original bone graft used for ACDF, and it has excellent structural and biological properties. It is osteoconductive, osteoinductive, and osteogenic and has a stiffness similar to that of the vertebral bodies. Unfortunately, harvesting the iliac crest is a potentially morbid procedure that can be complicated by pain or infection. Additionally, it takes up valuable time in the operating room. Given these concerns, surgeons transitioned to the use of allograft and subsequently PEEK cages. Recently, other bone graft substitutes such as BMP, ceramic-based synthetics (CBS), mesenchymal stem cells (MSC), and bone marrow aspirate (BMA) have come into use without much evidence regarding their efficacy. In order to better understand the effectiveness of the various bone graft substitutes, Dr. Stark and colleagues from Texas performed a systematic review to compare fusion rates across the various bone graft substitutes. They included all studies that evaluated one- or two-level ACDF performed for degenerative conditions and had at least 6 months of radiographic follow-up with either x-ray or CT scan to evaluate fusion status. They determined the mean fusion rates across studies for the various bone graft substitutes and combinations of substitutes. All studies including BMP reported a 100% fusion rate, though this product was also associated with the highest dysphagia rates (85% for 3 studies using BMP+PEEK cages and 58% for BMP+structural allograft). Ten studies used allograft alone and had a mean fusion rate of 87%, though this increased to 94% when one outlier study reporting a rate of 35% was excluded. Structural allograft or a PEEK cage combined with MSCs resulted in fusion rates around 90%. The lowest fusion rates were seen for CBS alone, with a mean fusion rate of 81%.

The authors have done a nice job reviewing and synthesizing the highly variable literature on this topic. This paper shares the limitations of all systematic reviews, namely that the quality of the data is only as good as the papers that were included. The main limitation is that every study had its own definition of fusion, which could vary in terms of duration of follow-up, use of radiograph or CT scan, or specific criteria used to determine fusion. The patient populations were also likely quite variable in terms of characteristics that could affect fusion rate such as proportion of smokers and diabetics. The products were also used in many combinations, and the authors did not separate structural allograft from demineralized bone matrix in the analysis. Despite these limitations, the data is fairly convincing that allograft alone yields fusion rates at least as high as the other options, with the exception of BMP. The authors noted the complications associated with BMP use in anterior cervical surgery, but they did not go so far as to discuss that BMP is essentially contra-indicated in anterior cervical surgery and is rarely if ever used at this point. The CBS products had the lowest fusion rates, likely due to their complete lack of osteoinductivity. Given the relatively good fusion rates with allograft alone, the authors concluded that there is no evidence to support adding products such as MSCs or CBS to allograft. Structural allograft is easy to use, results in high fusion rates, and is inexpensive, so the authors' conclusions appear valid despite the relatively low quality of the studies they were able to include in their analysis. For patients at risk for pseudarthrosis such as smokers and diabetics, the ideal bone graft remains undefined.

Please read Dr. Stark's article on this topic in the May 15 issue. Does this change your view on bone graft substitutes for ACDF? Let us know by leaving a comment on The Spine Blog.

​Surgery for lumbar radiculopathy or claudication is generally elective and done to alleviate pain and improve function. The decision to undergo surgery is related to how the patient and surgeon perceive the likely risks and benefits associated with the operation. While it is impossible to perfectly predict outcomes with surgical and non-operative treatment, patients can be advised on the probability of outcomes and complications as well as the typical recovery periods. Prior literature has suggested that patients retain only a small portion of the information conveyed to them at an office visit and oftentimes do not comprehend even relatively basic concepts pertaining to their condition, planned treatment, and likely outcomes. In order to better understand the decision-making leading up to lumbar decompressive surgery, Dr. Rehman and colleagues from McMaster University in Ontario performed a qualitative study in which they interviewed 12 patients after the decision was made to undergo surgery and before they underwent the operation. They also interviewed their six surgeons. Using inductive content analysis, they classified the content of the interviews into broad themes and compared patient and surgeon experiences around the decision-making process as well as their expectations. The authors documented that the patients had relatively limited recall about their condition, treatment, and recovery. The surgeons also felt that patients had a relatively limited understanding despite their efforts to convey information using aids like spine models and MRI images. Patients also believed that decompressive surgery would improve both their leg and back pain, while the surgeons were adamant that they informed patients that only their leg pain was likely to improve. While all patients went through a consent process in which risks were discussed, they tended to remember only serious—and very rare—complications such as paralysis. 
Patients tended to consult family, friends, and the internet to gather more information, accurate or not. They reported making the decision to proceed with surgery based on the severity of their symptoms and not a careful calculation regarding the risks and benefits of surgery.

The authors have done a nice job performing a qualitative study on a topic that can probably only be studied using such methods. Efforts to evaluate the decision-making process using validated outcome measures such as decisional conflict scales would have likely missed much of the content that was captured with the qualitative approach. The results of this study come as no surprise to surgeons who go through this process with patients on a daily basis. Patients generally arrive in a state of distress and are simply looking for a way to relieve their pain. While shared decision making experts extoll the virtues of an arithmetic calculation based on likely risks, benefits, and ultimate utility of surgery, such a process is foreign and unsatisfying to most patients. Many patients want a surgeon to recommend the treatment most likely to help them and expect that the surgeon has done the "calculations" behind such a recommendation. There is a subset of patients who do engage in the traditional, semi-quantitative shared decision-making process, and surgeons need to be able to judge their patients in terms of what type of decision support they require. The current paper did not record and analyze the actual content of the office visit at which the decision to proceed with surgery occurred, and that could have added some more objective data about what was conveyed and what was absorbed. Those details may actually matter little, as patient perceptions and expectations are what shape satisfaction with outcomes. Even if the surgeon did an excellent job explaining the condition, planned treatment, and likely outcomes, if the patient did not understand or retain the information, it will not shape their expectations or satisfaction. Studies such as these should serve as good reminders to surgeons that patient expectations may not be realistic, and multiple conversations before and after surgery are likely necessary to bring their expectations in line with reality.

Please read Dr. Rehman's article on this topic in the May 15 issue. Does this change how you view the surgeon's role in setting patient expectations? Let us know by leaving a comment on The Spine Blog.

Frailty has received much attention in the medical literature recently, and multiple spine surgery publications have reported that it is a risk factor for complications. Given that frailty indices take into account comorbidities, disability, and limited physiological reserve, this finding comes as no surprise. A more interesting question is whether or not frailty is modifiable. In order to better assess that question, Dr. Yagi and colleagues from Japan analyzed a prospectively collected database of 240 patients undergoing major surgery (average of 10 vertebral levels fused) for adult spinal deformity (ASD). Based on the medical record, they classified the patients as robust, prefrail, or frail using the modified frailty index. As expected, the complication rate was higher for the frail groups. The novel aspect of this study is that the authors assessed how well the comorbidities contributing to frailty were under control. For example, in diabetic patients they determined if the hemoglobin A1C was less than 7% and, in hypertensive patients, they determined if blood pressure was less than 180/110. They classified patients whose comorbidities were being appropriately managed according to guidelines as those with good control of frailty, and those who were not being managed well as having poor control. The average age of their cohort was 58, and 92% were women. Fifty-nine percent were classified as robust, 34% prefrail, and 7% frail. After combining the prefrail and frail patients, they found that 72% had good control, and 28% had poor control of frailty. The prefrail and frail patients had worse baseline and two-year sagittal imbalance and SRS-22 scores. These patients also had a higher rate of complications. When comparing outcomes between patients with good and poor control of frailty, there were no significant differences in radiographic outcomes, SRS-22 scores, or complication rates.
The poor control patients had a 26% increase in the odds of a complication compared to the good control group, though this was not statistically significant. The authors concluded that good control of frailty did not improve outcomes.

The authors have addressed a novel question and concluded that frailty is not a modifiable risk factor. Before accepting this conclusion, the study limitations need to be considered. For one, the number of poorly controlled patients is relatively low (n=27), and it is likely that the study was somewhat underpowered to detect differences between the good and poor control groups (for example, a 26% increase in the odds of a complication in the poor control group was not significant). Additionally, the authors did not assess whether or not efforts were actually made to treat the comorbidity, just whether or not the comorbidity was being controlled sufficiently according to guidelines. It is possible that patients with borderline diabetes were classified as having good control even though they just had mild disease, while some patients with brittle disease who were being treated aggressively were simply unable to meet the guidelines for being considered controlled. To truly answer the question about whether or not frailty is modifiable, patients would have to be randomized to an aggressive program to treat frailty pre-operatively vs. usual care. The current study supports the prior literature that shows frailty is associated with worse outcomes. While it raises the question of whether or not frailty is modifiable, I do not think it offers sufficient evidence for us to give up our efforts to optimize our patients pre-operatively.
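
The underpowering concern can be made concrete with a standard two-proportion sample-size calculation. The study's baseline complication rate is not restated here, so the 30% figure below is a hypothetical illustration; the point is the order of magnitude needed to reliably detect a 26% increase in the odds of a complication:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, odds_ratio: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size to detect a given odds ratio
    for a binary outcome (normal-approximation formula)."""
    odds2 = odds_ratio * p1 / (1 - p1)      # odds in the exposed group
    p2 = odds2 / (1 + odds2)                # convert odds back to probability
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(z**2 * var / (p2 - p1) ** 2)

# Hypothetical 30% baseline complication rate; OR of 1.26 as reported
print(n_per_group(0.30, 1.26))  # on the order of 1,300+ patients per group
```

With only 27 poorly controlled patients, even a true 26% increase in odds would essentially never reach statistical significance, which is why the non-significant finding should not be read as proof that controlling frailty is futile.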

Please read Dr. Yagi's paper on this topic in the May 15 issue. Does this change how you view the role of frailty in surgical decision making? Let us know by leaving a comment on The Spine Blog.

​Cervical disc arthroplasty (CDA) has been available in the United States for over a decade. However, it has not been widely adopted despite multiple IDE RCTs that have reported better results than ACDF. Dr. Lavelle and colleagues published the 10 year follow-up data on the RCT comparing the BRYAN cervical disc to ACDF for single level pathology in the May 1 issue. Over 450 patients were originally randomized to either CDA or ACDF using an anterior plate and allograft, and approximately 50% were available for 10 year follow-up. At enrollment, the average age was 45, all patients had radiculopathy or myelopathy, and patients with significant spondylosis, collapsed disk space, kyphosis, or facet arthropathy were excluded. The primary endpoint was "overall success", which required a greater than 15 point improvement on the Neck Disability Index (NDI), no decline in neurological function, no serious adverse events related to the implant, and no subsequent surgery at the index or adjacent levels. At 10 years, 81% of CDA patients and 66% of ACDF patients achieved "overall success" (p=0.005). The CDA group also improved about 7 points more on the NDI (38 points vs. 31 points, p=0.01). Neck and arm pain VAS scores also favored CDA, though the differences were not significant. The authors did not report the overall reoperation rates for the two groups, though they noted there was a trend towards a lower reoperation rate for adjacent segment disease in the CDA group (10% vs. 16%, p=0.15). They reported 6 reoperations for pseudarthrosis in the ACDF group and 3 device removals in the CDA group related to implant complications. The serious device-related adverse event rate was 4% for the CDA group and 5% for the ACDF group.

This study represents some of the best long-term data available on CDA. The results were similar to the short and medium term studies that showed modest benefits for CDA compared to ACDF. Like most long-term studies, this one is limited by loss to follow-up, which puts the study at risk for attrition bias. However, loss to follow-up was similar for the two groups, and it seems unlikely that those lost to follow-up were markedly different for the two treatment groups. The authors could have supported this hypothesis by analyzing the characteristics of the patients lost to follow-up in each group. They could have more clearly reported the reoperations in each group, as it is somewhat hard to determine the overall reoperation rates from the data provided. A major question about CDA is why it has failed to gain popularity despite promising trial results. The inclusion and exclusion criteria used in this study may explain the low rates of adoption. The enrolled patients were young, had single level disease, and no significant spondylosis. The vast majority likely had acute, soft disk herniations. This type of patient is relatively rare in most spine practices, with the majority of radiculopathy and myelopathy patients presenting with multilevel spondylosis with central and foraminal stenosis due to disk osteophyte and uncovertebral hypertrophy. These patients would not have been included in this study, and most surgeons consider that type of pathology a contra-indication to CDA. It may be that most spine surgeons do not see enough patients who are appropriate candidates for CDA to dedicate the time and energy to learning the technique and progressing along the learning curve required for its use. There is also concern about the very long-term outcomes for these devices, given that they tend to be indicated in relatively young patients. 
In this study, the average patient will likely live an additional 35 years or more, and it is unclear what will happen to these devices over decades. Most total joint replacements have a substantially shorter lifespan than the patients in this study. Finally, the advantages of CDA are relatively modest compared to ACDF, even in the highly selected populations enrolled in the industry-sponsored IDE trials. The patient reported outcomes are similar for CDA and ACDF, and the decrease in adjacent segment disease is relatively minor. While CDA does seem safe and effective, it is unclear if it will ever be widely adopted.

Please read Dr. Lavelle's article on this topic in the May 1 issue. Does this change your views about cervical disc replacement? Let us know by leaving a comment on The Spine Blog.

​Intraoperative neuromonitoring (IONM) is frequently used during high-risk spinal procedures and has been shown to be beneficial in spinal deformity surgery. The literature has demonstrated that neurological dysfunction due to spinal cord and nerve root compression resulting from correction maneuvers can be detected and is frequently reversible. Less is known about the benefits of IONM in other high-risk procedures such as decompression of OPLL and resection of spinal cord tumors. In an effort to better understand when IONM can help prevent neurological injury—as opposed to simply alerting the surgeon to an irreversible event—Dr. Yoshida and colleagues from multiple institutions in Japan evaluated IONM reports and neurological outcomes in over 2,800 patients undergoing high-risk spine surgery. Cases were classified as deformity correction, decompression of cervical and thoracic OPLL, and treatment of intramedullary and extramedullary spinal cord tumors. The IONM reports were classified as true positives, false positives, true negatives, and false negatives based on the conclusions of the monitoring team at the end of the case and the neurological exam on post-operative day number one. They also classified cases as "rescues" when there was an IONM alert indicating at least a 70% amplitude loss of transcranial motor evoked potentials (Tc-MEPs) that then recovered by the end of surgery. Overall, the alerts had a sensitivity of 93%, a specificity of 91%, a positive predictive value of 35%, and a negative predictive value of 99.6%. The low positive predictive value resulted from the relatively low number of true positive alerts compared to false positive alerts. They calculated the rescue rate as 52%, defined as the number of alerts that resolved by the end of the case and had no post-operative neurological deficit divided by these "rescue" cases plus the true positive cases. 
The rescue rate varied substantially across the different diagnoses, from 82% for cervical OPLL to 32% for intramedullary spinal cord tumor. Deformity had the second highest rescue rate at 61%. Reversing the rod rotation in deformity correction was associated with a rescue rate of over 70% compared to only 40% for reversing a 3-column osteotomy. Alerts occurring during OPLL decompression were also associated with low rescue rates (0% during corpectomy for cervical OPLL and 30% during posterior decompression for thoracic OPLL).
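
The reported metrics follow directly from confusion-matrix counts. The counts below are hypothetical, chosen only to reproduce the paper's summary figures (93% sensitivity, 91% specificity, 35% PPV, 99.6% NPV, 52% rescue rate) and to show why abundant false positives drag the PPV down:

```python
# Hypothetical counts consistent with the reported summary statistics
tp, fn = 93, 7       # true deficits: alert fired / alert missed
fp, tn = 173, 1749   # no deficit:    false alert / quiet case

sensitivity = tp / (tp + fn)   # 0.93
specificity = tn / (tn + fp)   # ~0.91
ppv = tp / (tp + fp)           # ~0.35: false alerts dwarf true positives
npv = tn / (tn + fn)           # ~0.996

# "Rescue rate" as defined in the paper: alerts that resolved with no
# post-operative deficit, divided by those rescues plus true positives.
# Note the rescues are a subset of the "false positive" alerts above,
# since they had an alert but no deficit on post-operative day one.
rescues = 100                  # hypothetical
rescue_rate = rescues / (rescues + tp)   # ~0.52

print(f"sens {sensitivity:.2f}, spec {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.3f}, rescue {rescue_rate:.2f}")
```

This also illustrates why PPV is so low despite excellent sensitivity and specificity: true neurological deficits are rare, so even a small false-alert fraction among the many unaffected cases overwhelms the true positives.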

This study is likely the most detailed analysis of IONM alerts and the effects of various interventions on reversing potential neurological damage. The authors did an incredible job assembling a large number of high-risk cases and then doing a deep dive into the IONM reports and post-operative neurological exams for each patient. The most significant limitation of the study is that it is not possible to determine how many patients were truly "rescued" by the intra-operative maneuvers following the alerts, as there is such a high false positive rate associated with IONM. Additionally, the actual number of each type of alert (e.g., signal loss during rod rotation) for each diagnosis gets relatively low despite starting with nearly 3,000 cases, so it is hard to draw strong conclusions when there are just a handful of many alert types. The authors also did not evaluate other IONM modalities such as somatosensory evoked potentials or D wave. Nonetheless, this is an extremely detailed analysis of IONM during high-risk cases. Use of IONM is standard of care for deformity surgery and spinal cord tumor surgery, though its benefit in myelopathy cases can be debated. While rescue rates were relatively low for certain events in the OPLL cases, there were some reversible deficits that corrected with posture changes or additional decompression. What was missing from the paper was an analysis of the deleterious effects of false positive alerts, including unnecessary maneuvers, prolonged surgery, or case abandonment. This paper suggests that IONM frequently yields alerts that prompt a maneuver that appears to prevent neurological deficit and supports its use in these high-risk cases. The jury remains out on its benefit in lower risk cases.

Please read Dr. Yoshida's article on this topic in the April 15 issue. Does this change how you view the benefits of IONM in high-risk spine surgery? Let us know by leaving a comment on The Spine Blog.

Risk factors for readmission after spine surgery have been
studied extensively, generally using large administrative databases. Smaller
investigations looking at the experience at a single institution have also been
published but have generally included fewer patients and have been relatively
underpowered. While administrative databases include large numbers of patients,
they generally lack the clinical details germane to spine surgery necessary to
draw meaningful conclusions. The Quality and Outcomes Database (QOD) was
created to capture large numbers of spine surgery patients along with the
important details relevant to spine surgery such as patient reported outcomes,
underlying diagnosis, and surgical technique. This database captures spine
surgery patients from 86 institutions in the United States through chart review
and patient questionnaires. The current study used data from over 33,000 lumbar
surgery patients in order to evaluate risk factors for 90-day readmissions. Patients
with deformity, infection, trauma, and tumor were excluded. The authors
classified the readmissions as medical (e.g., DVT, PE, cardiac disease, renal
disease, non-surgical site infection) or surgical (e.g., surgical site
infection, wound dehiscence, CSF leak, disk reherniation, hardware failure, new
neurologic deficit, hematoma, or pain) and developed multivariate regression
models evaluating risk factors for the two types of readmissions. The overall
90-day readmission rate was 6.15%, with 2.5% being readmitted for medical complications
and 3.6% for surgical complications. The risk factors for both medical and
surgical readmission were higher ASA grade, increased number of levels treated,
higher baseline ODI score, and anterior approach. Increased age, male gender, heart
disease, unemployment, fusion, and not smoking were risk factors specific to
medical readmission. Risk factors specific to surgical readmission included
increased BMI, female gender, depression, and African American race. With the
possible exception of non-smokers being at higher risk for medical readmission,
these risk factors are consistent with the prior literature on the topic.

The authors have done a nice job using a relatively novel
dataset to explore risk factors for readmission following lumbar surgery. They
identified the usual suspects for readmission—increasing age, medical and
psychosocial comorbidities, and surgical invasiveness. The one anomaly is that
smokers were at somewhat lower risk for medical readmission, though this has
been shown in prior studies. The causal chain behind this association is not
clear, though active smokers may be generally younger and more robust than
non-smokers. Additionally, smokers are strongly motivated to stay out of the
hospital where smoking is not permitted, and they have shorter hospital length
of stay following surgery. While this study does not provide any new
information, it supports what has been shown in the literature and this
consistency helps to validate the QOD database. The real question is whether
this information can be used to identify patients at high risk for readmission
and intervene in some way to lower their readmission rate. Very little has been
published on this, though it seems that diverting increased resources (e.g.,
nurse phone calls, visiting nurses, more frequent follow-up with primary care
providers) to high-risk patients could save resources in the long run. While
some complications and readmissions are unavoidable, many readmissions could
likely be avoided with increased intensity of post-operative care. Currently,
decisions about which patients to target for more post-operative surveillance
are based on provider perception and patient demand. Models like those in the
current study might be used to select patients for such an intervention in a
more accurate, objective fashion. Hopefully future studies will evaluate such
programs to determine if they are effective.

Please read the article by Dr. Sivaganesan and colleagues in the April 15
issue. Does identifying these risk factors change how you will target patients
for increased post-operative surveillance? Let us know by leaving a comment on
The Spine Blog.

​Spinal epidural abscesses (SEA) are occurring more frequently in our older, sicker population as well as in the increasing intravenous drug user population. These serious infections are difficult to treat and can result in neurological deficit, sepsis, and death. Treatment traditionally involved surgery and IV antibiotics for all epidural abscess patients, though recent literature suggests that medically stable patients without a neurological deficit can frequently be treated successfully with antibiotics alone. While the incidence of SEA is increasing, the absolute number of cases, particularly the number treated at a single institution, remains relatively low and makes powering studies on the topic difficult. In order to obtain a sufficient sample size to study the topic, Dr. Du and colleagues from Cleveland analyzed the NSQIP database and identified 1094 patients who underwent surgical treatment of SEA from 2011 to 2016. The NSQIP database includes outcomes and events up to 30 days following surgery, which allowed them to calculate a 30 day mortality rate of 3.7%. Risk factors for mortality included increased age, higher ASA class (indicating a greater comorbidity burden), diabetes, hypertension, respiratory disease, renal disease, bleeding disorder, metastatic cancer, thrombocytopenia, and receiving a perioperative blood transfusion. Multivariate analysis demonstrated that age over 60 years, diabetes, respiratory disease, renal disease, metastatic cancer, and thrombocytopenia were independent risk factors for mortality. Having 4 or more of these risk factors was associated with a 38% mortality compared to less than 1% mortality for patients with no risk factors. Seventy percent of deaths occurred within 2 weeks of surgery, though 10% occurred between 27 and 30 days post-operatively. Not surprisingly, cardiac arrest and septic shock were strongly associated with death.

The authors have done a nice job using a database to study a topic that is very difficult to study using traditional chart review given the relatively low number of patients who undergo surgery for SEA at any single institution. Additionally, death is a good outcome to study using a large database as it is captured reliably. Nonetheless, the study has all of the limitations associated with a database study, notably that many relevant variables such as neurological status, intravenous drug use, and cause of death were not included. Additionally, the database does not record death beyond 30 days from surgery, yet the data makes it clear that patients were continuing to die at a significant rate even at 30 days out from surgery. The paper provides a good benchmark mortality rate following surgery for SEA. While it did identify risk factors for mortality, none of these come as a surprise, and all are indicative of systemic disease burden which increases mortality risk for all types of surgery. Surgeons need to decide which SEA patients should undergo surgery, and this paper does not help answer that question. While sicker SEA patients are at higher risk for mortality, it is unclear how surgery affects this risk. Some patients may have a survival advantage with surgery due to a higher chance of clearing their infection, while other patients may succeed with antibiotic treatment alone and have an increased risk of mortality due to the physiological stress of surgery. Most patients survive regardless of treatment, and a small minority are so sick that they will likely die however they are treated. To answer this question would require a huge database of patients treated with surgery and medical management that also includes a sufficient number of clinical variables that could be used to create an accurate predictive model. An RCT to address this question is not feasible. 
It is possible that a Medicare database analysis could provide some further insight, but this administrative database includes only billing data and likely misses some of the key clinical variables. For now, surgeons will probably continue to operate on most SEA patients with neurological deficit, sepsis, or failed medical management.

Please read Dr. Du's article in the April 15 issue. Does this change how you view the treatment of SEA? Let us know by leaving a comment on The Spine Blog.

Concern about iliac crest bone graft (ICBG) harvest site morbidity has helped create a massive bone graft substitute industry and prompted many spine surgeons to use local bone graft and bone graft substitutes instead of ICBG, despite ICBG resulting in better fusion rates. The major concern relates to ICBG harvest site pain, though increased blood loss, operating time, hematoma formation, infection, and the potential for injury to surrounding structures are other reasons cited to avoid ICBG. While ICBG harvest site morbidity has traditionally been accepted as fact, more recent research suggests that long-term pain or other major complications are in fact quite rare when harvest is performed through the same midline incision used for lumbar fusion. In order to better assess ICBG harvest site pain, Lehr and colleagues from The Netherlands performed a prospective, patient-blinded study in which ICBG was randomly harvested from either the right or left iliac crest and patients were asked to identify from which side the graft was taken and to rate their midline back pain and bilateral iliac crest pain at baseline and then at four follow-up visits out to one year. This study was performed as part of a larger investigation comparing calcium bone graft substitute to ICBG. Ninety-two patients underwent instrumented lumbar fusion for degenerative conditions, with the fusion extending to the lower lumbar spine (L3 or more caudal). The majority (87%) underwent one or two level fusion. All patients had ICBG harvested and placed in one lateral gutter, with the calcium bone graft substitute placed on the contralateral side. The ICBG was harvested by creating a unicortical iliac crest window and removing the cancellous bone with gouges. Median harvested bone graft volume was 6 cc. Following surgery, 49% of patients reported that they did not know from which side the ICBG was harvested or answered inconsistently across the four follow-up visits.
Of the 51% who felt they could identify which crest was harvested and answered consistently at all follow-up visits, 48% identified the correct harvest site and 52% identified the wrong crest. Overall, 24% of patients consistently and correctly identified the ICBG site over the first year following surgery. There were no significant differences in median iliac crest pain scores between the harvested and intact iliac crests at any follow-up point, and iliac crest pain scores correlated with midline low back pain scores. Based on this, the authors concluded that ICBG harvest did not result in increased pain at the harvest site, and that concern about harvest site pain should not be the main reason to avoid ICBG harvest.

The authors have done an elegant study and compelling analysis, which strongly suggests that most patients do not experience prolonged ICBG harvest site pain. The study design was appropriate, though it does have a few significant limitations. The amount of bone graft harvested seems very low (median of 6 cc), and most surgeons traditionally harvest more than this. A more extensive bone graft harvest could result in higher levels of graft site pain. The authors did not report any complications related to bone graft harvest or discuss any increase in blood loss associated with the procedure. While rare, complications related to graft site harvest do occur (e.g. hematoma, infection, injury to structures in the sciatic notch), and any additional surgical work increases blood loss. They also note that median bone graft harvest only took 7.5 minutes, which might be related to the low volume of graft obtained. A more thorough harvesting technique, irrigation, control of bleeding bone, and closure typically takes longer than that. Despite these limitations, the current study and recent literature suggest that the pain and morbidity associated with ICBG harvest are much lower than suggested by the historical literature on the topic. The earlier studies frequently included harvest through a separate incision and removal of the entire outer cortex of the iliac crest. At this point, surgeons may be avoiding ICBG harvest more out of an effort to reduce time in the operating room than out of concern for harvest site morbidity. However, this study and others suggest that ICBG should be strongly considered as a graft option given its better osteoinductive properties and the low morbidity associated with its harvest.

Please read the article on this topic in the April 15 issue. Does this change your view about the morbidity of ICBG harvest? Let us know by leaving a comment on The Spine Blog.

The urgency of lumbar discectomy for patients with significant motor weakness has been debated, with most studies on the topic demonstrating minimal benefit for more urgent decompression. However, there is some evidence that decompression within 48 hours of the onset of cauda equina syndrome leads to better functional outcomes. No study has looked at the role of immediate surgery for lumbar disc herniation associated with motor weakness, likely because few discectomies are performed within 48 hours of the onset of weakness. In order to assess the effect of immediate discectomy on motor outcomes, Dr. Petr and colleagues from Innsbruck retrospectively reviewed a series of 330 lumbar disc herniation patients who presented with motor deficit. They divided the patients into two cohorts depending on whether they underwent surgery within 48 hours of the onset of weakness or beyond 48 hours. The immediate surgery group included 126 patients, while the delayed surgery group included 204. There were no significant demographic or baseline clinical differences between the two cohorts. Approximately 60% of the herniations were at L4-L5 and about 20% were at L5-S1. Twenty-four percent of patients had mild (grade 4/5) weakness, 53% had moderate (grade 3/5) weakness, and 23% had severe (grade 0-2/5) weakness. Postoperatively, the immediate surgery group had a greater improvement in motor strength at discharge, 6 weeks, and 12 weeks follow-up. The difference was not significant for the mild weakness group, in which 96% of the immediate surgery group had complete resolution of motor weakness at 12 weeks compared to 86% in the delayed surgery group (p=0.22). The differences were significant in the moderate and severe weakness groups, with 96% of the severe weakness group undergoing immediate surgery having complete resolution at 3 months compared to 64% of the delayed surgery group. The immediate surgery group also had greater resolution of sensory deficits.
There were no differences in the rate of residual sciatica between the immediate and delayed surgery groups.

The authors have presented a thorough retrospective analysis of their lumbar discectomy cohort that suggests that discectomy patients with a preoperative motor deficit have greater recovery of motor function if their surgery is performed within 48 hours of the onset of the deficit. While this intuitively makes sense, prior studies have not consistently demonstrated this. One reason for this discrepancy is that prior studies tended not to look at the effect of immediate surgery within 48 hours of symptom onset, as surgery is rarely performed this quickly for logistical reasons. It is impressive that the authors were able to perform surgery within 48 hours on over 1/3 of their lumbar discectomy patients with motor deficits. The authors point out that this is not an RCT, so no strong conclusions regarding causation can be drawn. One limitation is that they did not report the duration of symptoms for patients undergoing surgery beyond 48 hours after the onset of symptoms. It is possible that some of these patients had long-term motor deficits (i.e. months or more), and these were probably less likely to improve. Many patients present with an acute motor deficit that resolves relatively quickly without surgery, so it is not clear to what degree immediate surgery changed the natural history of the motor deficit. Given that the timing of surgery was not randomized, the two groups were likely different at baseline in ways not measured by the study. A future RCT comparing immediate surgery to delayed surgery and no surgery would answer the question, though it is not clear that such a study would ever be performed due to ethical concerns and strong patient preferences. I see very few patients in my practice who present with a motor deficit that has been present for less than 48 hours, primarily due to logistical issues around referrals and imaging. For that reason, I am not sure that immediate surgery is even logistically feasible for most healthcare systems.

Please read Dr. Petr's article on this topic in the April 1 issue. Does this change your view on immediate surgery for lumbar disc herniation with motor deficit? Let us know by leaving a comment on The Spine Blog.

As methicillin-resistant Staphylococcus aureus (MRSA) infections have become more prevalent, surgeons have considered different perioperative antibiotic regimens. Intravenous (IV) cefazolin has been the traditional choice for perioperative antibiotic prophylaxis, typically given within 1 hour prior to skin incision and continued for 24 hours post-operatively. However, cefazolin does not cover MRSA, so the natural consideration has been to add vancomycin. The use of intra-wound vancomycin powder has been the most publicized approach to this, with many papers suggesting intra-wound vancomycin powder significantly decreased the infection rate in patients undergoing instrumented posterior thoracic or lumbar fusion. Less has been published about the addition of IV vancomycin to the perioperative antibiotic regimen, despite it being an obvious alternative or addition to intra-wound vancomycin. The department of orthopaedic surgery at Brigham and Women's Hospital in Boston changed their policy for perioperative antibiotics for all cases involving implants in 2010, such that patients received both IV vancomycin and IV cefazolin within 1 hour prior to incision and for 24 hours after surgery. Prior to this, patients received only IV cefazolin for the same duration. Dr. Lopez and colleagues performed a pre-post analysis in which the rates of revision surgery for infection following instrumented spinal fusion were compared from 2005-2009 to 2011-2015. The cefazolin-only cohort included almost 1,300 patients and the cefazolin + vancomycin cohort almost 2,000 patients. At baseline, the cefazolin + vancomycin cohort was 4 years older and included more smokers and fewer trauma patients. Eight percent of the cefazolin + vancomycin cohort also had documentation of intra-wound vancomycin powder application.
The overall revision rate for infection decreased from 4% in the cefazolin-only cohort to 2% in the cefazolin + vancomycin cohort, and these results were essentially unchanged when controlling for baseline patient and surgical characteristics. The overall revision rate also decreased from 14% to 7% over the same period. There was no difference in the proportion of different organisms (i.e. MRSA, MSSA, others) isolated on intra-operative cultures between the two cohorts.

The authors have done a nice job looking at a natural experiment performed at their institution, and the results suggest that the addition of IV vancomycin to the typical perioperative antibiotic regimen might help reduce infection rate. The biggest threat to the validity of this study, and any pre-post study, is the potential for unmeasured secular trends acting as confounders. Secular trends represent other changes in practice over time not included in the analysis, and in this case could include the use of intra-wound vancomycin powder, changes in dressings, closure materials or irrigation techniques, or a change in the threshold to return to the operating room to address surgical site infection. While the authors did report that 8% of the cases in the cefazolin + vancomycin cohort also received intra-wound vancomycin powder, this could not be controlled for as no patients received intra-wound vancomycin in the cefazolin-only cohort. Given that the use of vancomycin powder was widely adopted from 2011-2015, the authors may have also not captured all of the cases in which it was used. The analysis would have been cleaner if anterior procedures were not included in the analyses as these procedures (especially ACDF) have much lower infection rates than posterior instrumented fusions. While the only way to convincingly demonstrate that perioperative IV vancomycin reduces surgical site infection would be an RCT, this paper demonstrates that the entire suite of infection control measures that were likely adopted in the later timeframe significantly reduced infection rates. Given that IV vancomycin was a key part of their infection control protocol, I would not change a thing if I were them. However, it is not clear if IV vancomycin should be adopted by other institutions that have seen significantly lower infection rates associated with their infection control protocols that do not include it. 
The use of IV vancomycin has some downsides, including the logistical challenges associated with its long duration of infusion and the potential for adverse events like red man syndrome. Adverse reactions to vancomycin seem much less common when administered as a powder in the surgical site. Individual institutions and surgeons will have to judge the available evidence to determine if it makes sense to add IV vancomycin to an infection control protocol that already includes intra-wound vancomycin.

Please read Dr. Lopez's article in the March 15 issue. Will this prompt you to add IV vancomycin to your perioperative antibiotic protocol? Let us know by leaving a comment on The Spine Blog.

The three major options for reconstruction of the disk space during ACDF are iliac crest autograft, structural allograft, and synthetic cages, usually made from PEEK or titanium. While autograft has the highest fusion rate, it is associated with significant morbidity and increased operating room time, so surgeons use it relatively infrequently. Additionally, acceptably high union rates have been reported using allograft and an anterior plate. Allograft is widely used in North America, though it is used less frequently in other parts of the world where it is less widely available or not used for cultural or regulatory reasons. Device companies developed synthetic cages as an alternative to autograft or allograft; cages have the advantage of being more resilient and not at risk for resorption. However, they are not osteoconductive and potentially limit the volume in the disk space where fusion bone could form. No sufficiently powered RCT has been performed comparing fusion rates between allograft and synthetic cages, and such a study would require a high number of patients given that nonunion is a relatively uncommon event. In order to attempt to answer this question, Pirkle and colleagues from Chicago used the PearlDiver database to compare nonunion rates between over 4,000 ACDF patients treated with allograft and over 2,000 treated with a synthetic cage. The two groups were similar at baseline in terms of tobacco use, diabetes, and number of levels fused. Overall, the allograft group had a nonunion rate of 2% compared to 5.3% in the cage group, a highly statistically significant difference. The authors also performed subgroup analyses stratified by tobacco use, diabetes, and number of levels fused. They found a lower nonunion rate for the allograft group in all subgroup analyses, and the differences were statistically significant in 25 of 26 subgroups.

The authors have done a nice job using an administrative database to attempt to answer a question that would be very difficult to answer in a prospective RCT. It is hard to imagine that any group would fund such a study that would likely require thousands of patients to be sufficiently powered, especially for subgroup analyses. This study has all of the limitations inherent in a retrospective, administrative database review. All such studies are affected by potential coding inaccuracies, though with this many patients, that seems unlikely to be a major problem. The biggest limitation is likely the subjectivity of the main outcome, namely a provider coding nonunion. The most serious potential confounder is the possibility that surgeons who use cages are more likely to code for nonunion. Using a synthetic cage results in higher reimbursement than structural allograft, and it is possible that surgeons more attuned to reimbursement issues may also be more likely to diagnose a nonunion and perform a revision procedure. It would have been interesting if the authors had also included an analysis of reoperation rate, which would have theoretically been associated with nonunion rate. Given that a Level 1 study is probably not going to provide the answer to this question, surgeons need to rely on the lower level data available and biological theory when making the decision about graft choice. There is no data suggesting that synthetic cages lead to better outcomes, and they generally cost more than allograft. Combining that with the results of this study would suggest that structural allograft is likely the preferred graft choice. The main motivation to use a synthetic cage may be the irrationally higher reimbursement for a procedure that takes the same amount of time and effort.

Please read Mr. Pirkle's article on this topic. Does this change your thoughts about graft/spacer choice in ACDF? Let us know by leaving a comment on The Spine Blog.

Spine surgeons and patients frequently assume that a longer duration of symptoms in spinal stenosis portends less improvement following decompression due to irreversible changes in nerve tissue subjected to long-term compression. This can lead to a decision to perform surgery sooner rather than later in order to maximize the likelihood of a good outcome. While non-operative treatment is likely less effective for radiculopathy and claudication in the setting of lumbar stenosis, it can be effective in some patients and allow a subset of this elderly, frail population to avoid surgery with its inherent risks and recovery time.1 However, surgeons may avoid or cut short efforts at non-operative treatment due to concerns about delayed surgery resulting in worse outcomes. In order to better understand the effect of duration of symptoms on lumbar laminectomy outcomes, Dr. Movassaghi and colleagues from Rush University reviewed over 200 lumbar stenosis patients who underwent laminectomy without fusion from 2008-2015 and had at least 3 months of follow-up. They stratified the patients based on duration of symptoms for more than or less than one year. At baseline, the two groups were similar, though the longer symptom duration patients were slightly older and had somewhat worse SF-12 physical function scores. This group was also significantly more likely to undergo a multilevel decompression. At final follow-up, there were no significant differences in Oswestry Disability Index, SF-12, or VAS back or leg pain scores after controlling for baseline differences. There were also no significant differences in reoperation rate or patient satisfaction. Based on these findings, the authors concluded that duration of symptoms did not affect surgical outcomes in lumbar stenosis.

This is a well-done retrospective cohort study that addressed a question that surgeons and patients face daily. The results were somewhat different from those reported from the Spine Patient Outcomes Research Trial (SPORT), which found slightly less improvement following surgery for lumbar stenosis in patients with symptom duration greater than one year.2 However, that study also reported no outcome differences between the symptom duration groups in the degenerative spondylolisthesis cohort and no differences in treatment effect of surgery between the symptom duration groups (the benefit of surgery compared to non-operative treatment) for either the stenosis or degenerative spondylolisthesis cohorts. The Maine Lumbar Spine Study (MLSS) reported lower long-term patient satisfaction for patients who were initially treated non-operatively and subsequently underwent surgery in a delayed fashion.3 The current study is somewhat limited compared to SPORT or MLSS by its retrospective nature, though selection bias cannot be eliminated in any study addressing this question as duration of symptoms cannot be randomized. The authors did do a multivariate analysis to control for baseline differences. While these studies have come to somewhat contradictory conclusions, the overall conclusion seems to be that duration of symptoms may have a small but probably clinically unimportant effect on surgical outcomes in lumbar stenosis. There is no evidence to suggest that non-operative treatment should be avoided out of concern about worsening surgical outcomes due to treatment delay. Treatment of lumbar stenosis is a preference-sensitive decision that should take into account patient symptoms, comorbidities, and goals of treatment. The appropriate time to have surgery is when the patient feels that the likely benefits of surgery outweigh the risks and recovery time associated with the operation.

Please read Dr. Movassaghi's article on this topic in the March 1 issue. Does this change your opinion about the effect of symptom duration on lumbar laminectomy outcomes? Let us know by leaving a comment on The Spine Blog.

The rates of medical procedures have been studied extensively, though the "appropriate rate" of most procedures remains unknown. However, changes in procedure rates can be illuminating and can reflect changes in medical knowledge that affect practice patterns, changes in the prevalence of the underlying disease, changes in demand for the procedure among patients, and changes in physician reimbursement. To better understand the rate of elective lumbar fusion over time, Dr. Martin and colleagues from Utah analyzed the National Inpatient Sample administrative billing database from 2004-2015. They excluded fusions that extended outside of the lumbar region and those performed for non-elective indications (i.e. trauma, tumor, or infection). After applying their exclusion criteria, they identified over 2 million elective lumbar fusion admissions. Overall, the absolute number of lumbar fusions increased by approximately 60%, and the age- and gender-adjusted rate increased by 32% over the time frame under study. The steepest increase occurred from approximately 2008-10, with rates essentially flat from 2010-2015. After stratifying by age, there was minimal change in the rate of fusion for those under 65 years old, while the rate increased by 73% for those over 65 from 2004-2015. The authors used a validated algorithm to classify the diagnosis for each case as either spondylolisthesis, scoliosis, disk herniation, spinal stenosis, or degenerative disk disease. Approximately 45% of patients had a diagnosis of spondylolisthesis and 12% were diagnosed with scoliosis, both of which are widely accepted indications for fusion. The remaining 43% underwent fusion for spinal stenosis, disk herniation, and degenerative disk disease, diagnoses for which fusion is frequently not indicated. Total hospital costs reached approximately $10 billion in 2015.

This paper did a nice job documenting elective lumbar fusion rates in the United States over a decade. While the overall rate increased modestly over time, there was a major increase in the rate of fusion for elderly patients over 65. The cause of the increased rate in this population is likely multifactorial. It could be related to surgeons being more willing to offer surgery to this more medically fragile group given improvements in anesthesia and post-operative medical care. Additionally, older patients' expectations may have changed over time, and they may have become more enthusiastic to undergo surgery in an effort to preserve function. Another interesting finding is that the rates of fusion for spondylolisthesis and scoliosis have increased, while the rates for diagnoses with more dubious indications have fallen. Whether this is due to an actual change in practice or simply reflects a change in coding patterns cannot be determined from this study. This paper has all of the limitations of large administrative database studies, namely that there were few clinical details available, and the diagnoses were based on billing codes, which could have been inaccurate. Another limitation is that the denominator of patients with the condition for which surgery was being performed was unknown, so the rates were based on age- and sex-adjusted population numbers. It would have been helpful for the authors to have included data on discectomy and laminectomy without fusion to see how those rates have changed alongside the fusion data. Previous studies have suggested that there has been no increase in laminectomy rates while fusion rates were increasing. Overall, the data suggest that surgeons may be operating more frequently for evidence-based indications. An alternative, more cynical explanation is that surgeons have simply changed their coding in order to avoid payment problems with insurance companies.

Please read Dr. Martin's paper on this topic in the March 1 issue. What do you think about how frequently lumbar fusion is performed in the United States? Let us know by leaving a comment on The Spine Blog.

Recent RCTs comparing decompression alone to decompression and fusion for degenerative spondylolisthesis (DS) reached somewhat contradictory conclusions, though these studies made it clear that not all DS patients need to be fused.1,2 However, it remains unclear who does need a fusion. Some studies have suggested that patients with a mobile listhesis, maintained disk height, and sagittally aligned facet joints are at increased risk for progression of listhesis following decompression without fusion, though no large study has confirmed this.3 In order to better understand if baseline radiographic findings can predict failure of decompression alone in DS, Dr. Schar and colleagues from Switzerland retrospectively reviewed a case series of 161 patients undergoing unilateral or bilateral laminotomy (midline sparing surgery) for spinal stenosis. Approximately one third of the patients had at least one level with a spondylolisthesis, and no patients underwent a fusion at the index procedure. Patients with greater than 3 mm of motion on flexion-extension radiographs were excluded. At a median four-year follow-up, 15% of patients had undergone reoperation, 72% of which were for recurrent stenosis and 28% of which were for adjacent segment stenosis. Of the 56 patients with listhesis, 18% underwent a revision surgery for recurrent stenosis. Only 4% of patients without listhesis underwent a reoperation for recurrent stenosis. Disk height and facet angle were not associated with risk of reoperation. The authors did not report patient reported outcomes (PROs).

This is a nice retrospective cohort study that demonstrated that the presence of a "stable," low-grade listhesis is a risk factor for reoperation following midline-sparing decompression. The study also failed to identify any radiographic risk factors for reoperation other than the presence of listhesis. The authors argue that an 18% reoperation rate at 4 years is acceptable and may be preferable to outcomes following fusion. However, this study lacks a fusion comparison group and really does not provide any information to guide the decision about whether or not to perform a fusion for a DS patient. The results do suggest that the reoperation rate using this technique in the absence of listhesis is very low. In order to answer the question about which DS patients benefit from fusion, a study needs to include a large number of DS patients treated with decompression alone or with decompression and fusion. Such a study would need to include baseline patient characteristics and radiographic studies, PROs, and reoperation rates. This would allow for subgroup analyses to be performed to compare outcomes between decompression alone and decompression and fusion for patients with different baseline characteristics and radiographic findings. More complex is determining the best decompression (i.e. midline laminectomy vs. midline-sparing laminotomy) or fusion (i.e. uninstrumented vs. instrumented vs. interbody) technique. These studies would need to include a much larger number of patients than have been assembled in prior studies in order for the subgroups to be large enough. Until such a study exists, surgeons will need to consider the available data and discuss the pros and cons of including a fusion when deciding on surgical technique with their patients.
Given the technical difficulty, increased complication rate, and likely worse outcomes for a revision decompression as compared to a decompression and fusion for adjacent segment degeneration, it may make sense to fuse healthy patients who can tolerate the operation.

Please read Dr. Schar's article on this topic in the February 15 issue. Does this change your opinion about the need for fusion in DS? Let us know by leaving a comment on The Spine Blog.

In an effort to increase the value of surgery, the Centers for Medicare and Medicaid Services (CMS) created the Bundled Payments for Care Improvement (BPCI) program. This program has changed the traditional fee-for-service model to one in which hospitals and providers are paid a lump sum for the surgical care episode (generally defined from 3 days prior to surgery to 90 days after surgery) independent of the costs and events associated with the episode. The goal is to shift financial risk onto the hospitals and providers in order to motivate them to limit the cost of care while maintaining quality. This payment model has been shown to reduce the cost of care for total joint arthroplasty. In order to determine the effect of this payment model on costs and outcomes in spinal fusion, Dr. Bronson and colleagues from the Hospital for Joint Diseases in New York evaluated their experience with BPCI for non-cervical posterior spinal fusion for degenerative conditions from 2013-2014. They compared costs and outcomes for 350 patients enrolled in BPCI to a baseline cohort of 518 patients treated from 2009-2012. In both cohorts, approximately 94% of patients were classified in DRG 460 (posterior spinal fusion one to eight levels without major complication or comorbidity) with the remainder in DRG 459 (spinal fusion with major complication or comorbidity). The BPCI cohort was more likely to undergo a "complex" surgery, defined as a revision surgery, greater than 3 level fusion, or a procedure including an interbody fusion (i.e. TLIF or PLIF), as compared to the baseline group (45% vs. 23%). Most of this increase in case complexity was driven by increased use of interbody fusion (16% BPCI vs. 2% baseline). They found that the overall Medicare cost per episode was $52,655 in the BPCI cohort compared to $48,913 for the baseline cohort.
They calculated that this resulted in a $1.3 million net loss for the BPCI group, despite a shorter length of stay, unchanged readmission rate, and lower rate of discharge to inpatient rehab or skilled nursing facility.

The authors have done a nice job auditing and reporting their experience with the BPCI for spinal fusion. It provides a cautionary tale to institutions considering enrollment in the program. The challenging part of interpreting this paper is that the costs are not itemized, making it impossible to determine what was driving the increased Medicare cost. They also did not perform a micro-cost analysis, so the hospital costs associated with performing the operation were not quantified. This makes it impossible to calculate the margin, which is ultimately what determines the financial viability of any surgical program. While they report a "loss" of $1.3 million, this is relative to what they would have received in Medicare payments outside of the BPCI program. If the institution made changes to reduce their costs associated with surgery (i.e. lower implant costs, lower length of stay, less imaging, lower rate of discharge to facilities, etc.), they may have actually increased their margin despite lower reimbursement by Medicare. However, given the increased case complexity they reported, including longer fusions and more interbody fusions, implant costs likely rose, making this unlikely. The authors are correct in stating that a single reimbursement rate for any posterior fusion from one to eight levels, regardless of revision status or use of interbody instrumentation, is inappropriate. As can be seen from the high standard deviations around the Medicare costs (over 50% of the mean), this is a highly heterogeneous group. The actual hospital costs are likely much more variable given that the Medicare cost is not affected by length of stay or cost of instrumentation and includes a single payment for the inpatient stay based on the DRG. Under the current BPCI model, hospital systems are incentivized to perform the simplest, lowest cost procedure covered by the DRG on the healthiest patients they can find.
This could result in the most vulnerable patients with significant comorbidities and complex spine pathology losing access to care. Hopefully this is not the goal of CMS, and future iterations of this program should do a better job of taking case complexity into account.

Please read Dr. Bronson's article on this topic in the February 15 issue. Does this change how you view the role of bundled payments in spine surgery? Let us know by leaving a comment on The Spine Blog.

​At most institutions performing spine surgery, there is a friendly rivalry between orthopaedic spine surgeons and neurosurgeons about which service has better outcomes. In an attempt to answer this provocative question, Daniel Snyder and colleagues from New York evaluated both their institutional data and the NSQIP database looking at in-hospital complications following posterior cervical decompression and fusion (PCDF). They identified over 1,200 patients at their institution who underwent PCDF from 2006-2016 and over 11,000 PCDF patients in the NSQIP database from 2007-2015. About 56% of the patients at their institution were treated by a neurosurgeon compared to 78% in the national database. The cohorts were similar in baseline characteristics including age, gender, and ASA score (though some clinically insignificant but statistically significant differences did exist). In both the institutional and NSQIP cohorts, there was a higher transfusion rate among patients treated by orthopaedic surgeons (14.5% vs. 9.1% institutional and 11.2% vs. 6.2% NSQIP). When all in-hospital complications including transfusion were evaluated, there was no significant difference in the institutional data (19.5% ortho vs. 22.1% neuro), but the orthopaedic surgeon group had a higher overall rate of complications in the NSQIP database (18.1% vs. 14.0%). When controlling for age, gender, and ASA score, multivariate analysis revealed that orthopaedic surgeons had a 66% increased odds of any complication compared to neurosurgeons in the NSQIP database. The authors did not perform an analysis looking at overall complication rates excluding transfusion, though a quick calculation based on their tables yielded a 5.0% rate for orthopaedists and a 13.0% rate for neurosurgeons in their institutional database and a 6.9% rate for orthopaedists and a 7.8% rate for neurosurgeons in the NSQIP database.
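As an aside, the 66% increased odds is an adjusted estimate from the multivariate model. A back-of-the-envelope check (my own illustration, not a calculation from the paper) shows that the unadjusted odds ratio implied by the reported NSQIP complication rates is smaller, which is why the covariate adjustment matters:

```python
def odds_ratio(p_exposed: float, p_control: float) -> float:
    """Unadjusted odds ratio computed from two event proportions."""
    return (p_exposed / (1 - p_exposed)) / (p_control / (1 - p_control))

# Overall NSQIP in-hospital complication rates reported in the study:
# 18.1% for orthopaedic surgeons vs. 14.0% for neurosurgeons.
or_unadjusted = odds_ratio(0.181, 0.140)
print(round(or_unadjusted, 2))  # 1.36, versus the adjusted OR of 1.66
```

The gap between the crude 1.36 and the adjusted 1.66 suggests the case mix differed between specialties, reinforcing the concern below about uncontrolled confounding.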

While the authors pose an interesting question, the limitations of their databases and methodology make it very difficult to draw any meaningful conclusions. Patients treated by orthopaedic spine surgeons had higher transfusion rates in both cohorts, though it is unclear if this was related to variation in intra-operative blood loss, differences in magnitude of surgery, or different thresholds for transfusion. The authors' inability to control for magnitude of surgery is a major potential confounder, as it is possible that orthopaedic spine surgeons tended to include more levels in their fusions, which would have increased both transfusion rates and overall complication rates. The authors also should have performed an analysis of overall complications excluding transfusion, as transfusion is generally a far less serious complication than the others they evaluated (e.g. airway complications, MI, PE, sepsis, wound infection, and death). The most significant limitation of the paper is that the outcomes evaluated are of secondary importance compared to outcomes and complications that occur after discharge from the hospital. They did not look at rates of repeat surgery, surgical site infection diagnosed after discharge (by far more common than infection diagnosed during the index hospitalization), hardware complications, post-operative neurological deficits, pseudarthrosis, or patient reported outcomes, all of which are more relevant than transfusion rate or rates of rare inpatient complications. Other than a pneumonia rate of 4.25% for neurosurgeons in the institutional cohort, all other specific complication rates were under 2.5%. The authors also did not control for diagnosis (i.e. radiculopathy vs. myelopathy), and it is well-established that myelopathy patients have higher complication rates.
As an orthopaedic spine surgeon I might be accused of being defensive, but I do not think any conclusions can be drawn from this study other than that orthopaedic spine surgeons had an overall higher transfusion rate for reasons that remain unknown. On a positive note, the authors did suggest that collaboration between orthopaedic spine surgeons and neurosurgeons might improve outcomes, and everyone would agree that a healthy relationship between the two services would be helpful to patients, providers, and their entire institution.

Please read this article in the February 1 issue. Does this change how you view the association between spine surgeon specialty and outcomes? Let us know by leaving a comment on The Spine Blog.

​The opioid epidemic has been widely studied, and many
narcotic addicts had their first exposure to opioids via a prescription for
pain medication. Low back pain patients are at especially high risk for opioid
prescription and subsequent addiction due to prescribing patterns and the
chronic nature of the problem. While patient characteristics associated with
long-term narcotic use have been studied extensively, the role of the specialty
of the initial provider encountered for low back pain in predicting long-term
use has been less well-studied. In order to evaluate the association between
initial provider specialty and likelihood of early and long-term narcotic
prescription, Azad and colleagues from Stanford University analyzed almost
500,000 initial low back pain encounters in the MarketScan Commercial Claims
database from 2010. All patients had an index low back pain or radiculopathy
diagnosis without a prior similar diagnosis or opioid prescription in the prior
6 months. They considered these patients as opioid-naïve and presenting with
acute back pain. Almost 50% had their initial encounter with a primary care
physician, 18% with a specialist, 18% in an acute care setting (i.e. urgent
care or the emergency department), and 17% with a non-physician provider (i.e. acupuncturist,
chiropractor, physical therapist, or physician’s assistant). Overall, 40% of
these patients received at least one prescription for opioids within 12 months
of the initial diagnosis, and 4% received at least 6 prescriptions in that
time-frame (10% of those receiving at least one prescription) and were
considered long-term opioid users. Not surprisingly, acute care providers were
the most likely to give an early opioid prescription within
2 weeks of diagnosis (40% acute care vs. 22% primary care). However, patients
initially seen by an acute care provider were actually at somewhat decreased
risk for long-term use (1% acute care vs. 2% primary care). Also not
surprisingly, patients initially seen by a pain management physician were at
highest risk for long-term use (7%). Patients initially seen by an
acupuncturist, chiropractor, or physical therapist were the least likely to
receive an opioid prescription at any point or be a long-term user.

The authors have asked an interesting question about the
role of the initial low back pain provider in setting the stage for opioid use.
This question differs subtly from the likelihood of prescription by provider
type as it attempts to look at the role of the first contact the patient has
with the healthcare system. While acute care providers are frequently
criticized for overprescribing opioids for back pain, it seems as though opioids
prescribed in this setting were not associated with long-term use. It is also
striking that simply having an initial encounter with a non-prescribing
provider (i.e. acupuncturist, chiropractor, physical therapist) seems
protective against receiving an opioid prescription even though many of these
patients likely encountered physicians later in their course. Like all large
database studies, the ability to draw strong conclusions about causation is
very limited. Many provider and patient characteristics that likely drove the
decision to prescribe opioids were not captured in the database and could be
strong confounders. For example, patients who were not interested in opioids or
traditional medical care were probably more likely to see a non-prescribing
provider initially, and this could have been the main driver of their lower
risk of opioid prescription compared with those who sought care from a pain
management provider. Additionally, the look-back period was only 6 months, so some of
these patients may have had a significant history of chronic back pain and/or
narcotic use in the more distant past, and these patients may have been more
likely to seek out different types of providers. Given the publicity about the
opioid epidemic and publication of guidelines recommending against prescribing
narcotics for low back pain, it would be interesting to see how patterns have
changed recently compared to 2010. Avoiding the prescription of narcotics and
considering alternative pain management strategies (i.e. physical therapy,
chiropractic care, acupuncture) seems to be a reasonable approach to the
management of acute low back pain.

Please read this article in the February 1 issue. Does this change how you
consider the role of the initial provider in the treatment of acute low back
pain? Let us know by leaving a comment on The Spine Blog.

​The role of indolent infection in orthopaedic surgery is becoming better understood, especially in the shoulder surgery and total joint replacement literature. Many failed shoulder and joint replacement cases that present without the classic signs and symptoms of infection are found to be due to indolent infection after intra-operative cultures are obtained. The role of indolent infection in failed spine surgery is less well-defined, though there has been significant study of the role of P. acnes in disk degeneration. Given that indolent infection is a relatively common cause of failure in orthopaedic surgery on the extremities, it likely also plays a role in spine surgery. In order to better assess this, Dr. Steinhaus and colleagues at the Hospital for Special Surgery reviewed nearly 600 consecutive revision spine cases to determine which factors affected the decision to obtain intra-operative cultures and which characteristics were associated with infection. They removed the 17 cases in which infection was expected pre-operatively, which yielded 578 presumed aseptic cases. Cultures were obtained in 112 (19.4%) of these cases, and multivariate analysis demonstrated that obesity, thoracolumbar fusion, pseudarthrosis, implant failure, and the presence of instrumentation were independent predictors of cultures being obtained. Forty percent of cultured cases had positive cultures, with Staph species and P. acnes being the most common organisms. Multivariate analysis showed that male gender (OR 3.4) and pseudarthrosis (OR 4.1) predicted positive cultures, while having undergone a fusion (OR 0.3) decreased the risk of infection. Typical risk factors for infection such as obesity, diabetes, malnutrition, and smoking were not associated with positive cultures.

The authors have done a nice job looking at an issue spine surgeons frequently encounter but about which there is very little evidence to guide their decision-making. While the literature is replete with papers on risk factors for infection, very little has been published about revision cases that are presumed to be aseptic but in fact represent indolent infection. While the authors assembled a large series of nearly 600 revision cases, only 45 had positive cultures, which significantly limited their power to perform multivariate analysis looking at risk factors. It is unclear if the lack of association between traditional risk factors for infection and positive cultures found in the study was due to lack of power or because these are not risk factors for indolent infection. Given the well-established association between infection and pseudarthrosis in orthopaedic trauma surgery, many spine surgeons, including myself, routinely culture all pseudarthrosis cases. The surgeons in this series cultured 43% of the pseudarthrosis cases and 50% of the hardware failure cases, demonstrating that these diagnoses increased their rate of obtaining cultures, yet this was not a universal practice. Given the retrospective nature of the study, it is not possible to know why they obtained cultures in some of these cases but not others. The surgeons did not obtain cultures in all of the revision cases, which makes the analysis of risk factors hard to interpret due to likely selection bias. It would also be helpful to know if the surgeons were using intra-wound vancomycin powder at any point during the study as that would likely affect the infection rate and type of organisms encountered. While this paper has all of the limitations of a retrospective study, it has done a very nice job demonstrating that indolent infection is not uncommon in pseudarthrosis cases, and it is likely prudent to culture all revision cases done for this diagnosis.

Please read Dr. Steinhaus's paper on this topic in the February 1 issue. Does this change how you consider the role of intra-operative cultures in cases that are presumed to be aseptic? Let us know by leaving a comment on The Spine Blog.

While the role of fusion in the surgical treatment of
degenerative spondylolisthesis remains controversial, most spine surgeons agree
that a solid fusion is preferable to a pseudarthrosis.1-3
Investigators have demonstrated that pedicle screw instrumentation, iliac crest
bone graft, and BMP-2 increase fusion rates. Unfortunately, these are all
associated with increased morbidity, increased cost, or both. As such,
researchers have made an effort to identify lower risk alternatives to increase
fusion rates. Teriparatide (synthetic parathyroid hormone, marketed as Forteo)
has shown promise, improving fusion rate and quality in animal models.
A non-randomized Japanese study compared fusion rate
in osteoporotic women with degenerative spondylolisthesis treated with either teriparatide
or risedronate around the time of lumbar laminectomy and instrumented fusion
with local bone graft. The patients received the medication for two months
pre-operatively and for eight months post-operatively. There was no control
group that received placebo or no medication. They found that the 12-month
fusion rate as determined by CT scan was higher in the teriparatide group (82%
vs. 68%). Given these promising findings, Dr. Jespersen and colleagues from
Denmark performed an RCT in which 101 degenerative spondylolisthesis patients
over 60 years old undergoing one or two level decompression and uninstrumented fusion
using local bone graft and allograft were randomized to 90 days of teriparatide
or placebo. At one year, all patients underwent a CT scan to determine fusion
rate, fusion mass volume, and fusion mass density. Overall, the fusion rate was
33%, and there were no significant differences between the two groups (29%
teriparatide vs. 37% placebo). There were no differences in fusion mass volume
or density. Based on these findings, the authors concluded that 90 days of
teriparatide did not change the fusion rate or quality in this population.

The authors should be congratulated on successfully
performing a Level 1 study that was well-designed to answer a specific clinical
question. While RCTs provide the highest-level evidence, they can only answer
one specific question. Teriparatide seemed promising in animal models, and it
may be helpful in different scenarios than the one studied here. This
investigation looked at a specific dose, duration, and fusion technique, and it
is reasonable to conclude that teriparatide was not helpful in this specific
situation. It is possible that longer duration therapy, starting it months
pre-operatively, using it with instrumented fusion, or using it strictly in a
population of osteoporotic women would result in a different outcome, though
these specific scenarios would need to be studied to answer the question. One
of the striking findings of this study is that the overall fusion rate was only
33% with an uninstrumented fusion using local bone graft and allograft. Patient
reported outcomes were not included in this study, but there is some evidence
suggesting that long-term outcomes are worse in uninstrumented fusion patients
who go on to nonunion.3
Given that many degenerative spondylolisthesis patients do well without fusion,
it may be that nonunion does not have a markedly negative impact on their
outcomes. It will be interesting to see the patient reported outcomes in this
study population to determine if the patients with nonunion have worse outcomes.
The role of teriparatide in lumbar fusion remains unclear, though this study
makes it clear that it is not beneficial at this dose and for this duration in
this population. The bigger question about the best surgical technique for
degenerative spondylolisthesis patients remains unanswered. It seems likely
that degenerative spondylolisthesis represents a disease spectrum and that
patients with different characteristics do best with different operations,
though how to determine the best operation for an individual patient remains
unknown.

Please read Dr. Jespersen’s article on this topic in the February 1 issue. Does
this change your view of teriparatide in lumbar fusion? Let us know by leaving
a comment on The Spine Blog.

​Qualitative research is frequently used in the social sciences but is rarely encountered in the spine surgery literature. Much has been written about the use of shared decision making to aid patients deciding about whether to undergo spine surgery, though there is scant literature about what the patient experiences during this process. This topic does not lend itself to traditional quantitative methods using patient reported outcomes and requires a qualitative approach. Dr. Andersen and colleagues from Denmark designed a qualitative study to evaluate patient perceptions regarding the decision to undergo lumbar discectomy. They interviewed 14 patients presenting with radiculopathy in the presence of an MRI-confirmed lumbar disk herniation to determine which factors affected their decision-making. Nine of the fourteen also underwent a second interview 1-2 months later. The interviewers asked them open-ended questions about the decision-making process and recorded the discussions. They then coded the patient statements according to themes, and the group arrived at four main themes that they observed across the interviews. The major factors that affected decision-making and how the patients experienced the process were the level of patient information, the effect of accelerated workflows, the power imbalance between clinicians and patients, and the patients' personal experiences with acquaintances who had been treated for a lumbar disk herniation. The investigators found that patients frequently had misinformation prior to meeting with a spine surgeon, and this misinformation affected their decision-making. Patients also reported feeling rushed through the process, which led some to decide to go ahead with surgery without feeling as though they had sufficient time to make the decision. 
Many patients reported feeling as though they would defer to the recommendation of the surgeon as they saw the surgeon as the expert whose opinion was more important than their personal preferences. Some based their opinion about discectomy on the spine surgery experiences of others they knew, which could be either positive or negative.

The authors have done a very nice job performing a qualitative analysis regarding the patient experience during decision-making around lumbar discectomy. Such studies are not common in the spine literature, and this type of analysis is key to getting at topics such as this. A traditional quantitative analysis using measures of decisional conflict and satisfaction with decision-making would have lost the meaningful information that can only be captured through interviewing. The results of this study are not surprising and are in line with many studies looking at shared decision-making. The challenging aspect of this type of study is that most of the factors that made decision-making difficult were outside of the control of the clinicians. The misinformation that patients had prior to the spine consultation tended to be from the internet, non-spine clinicians, and other patients. Surgeons are familiar with correcting patients' false information, and this is a difficult, time-intensive process. The patients' comments about power imbalance suggest that some do not necessarily see it as a problem but simply as the reality of the situation. They view clinicians as experts and seem happy to follow their advice. Many patients felt pressured to make a decision about surgery quickly, though this was indirect and more related to the scheduling process than actual pressure applied by the surgeons. Qualitative research is not well-understood by the spine community (or myself for that matter), and it seems to have a high risk of bias as the researchers determine the themes. This process is clearly shaped by their beliefs, and it is hard to know if the comments determined the themes or if the researchers had preconceived notions about the themes and found comments to support these categories. Despite these limitations, a qualitative design is likely the only way to study this topic.
The results suggest that we still have a long way to go to reach truly shared decision making.

Please read Dr. Andersen's article on this topic in the January 15 issue. Does this change how you view the patient experience in deciding about a lumbar discectomy? Let us know by leaving a comment on The Spine Blog.

​As 2018 comes to a close, it offers an opportunity to reflect on some of the important issues facing spine care providers and researchers. In reviewing The Spine Blog topics for the year, a few important themes stood out. The role of fusion and optimal fusion technique for degenerative spondylolisthesis (DS) patients remains controversial since the 2016 publication of two RCTs reaching opposite conclusions about the benefit of fusion in DS. In July, Vail and colleagues published the results of a large database study that demonstrated patients undergoing laminectomy and fusion had higher initial costs and complication rates compared to the laminectomy alone patients, but the laminectomy alone patients also had higher post-discharge costs and a higher reoperation rate. Interestingly, the uninstrumented fusion patients had the highest rate of post-operative complications. The Spine Patient Outcomes Research Trial (SPORT) 8 year DS study was also published. This paper completed the long-term follow-up series for the three diagnostic groups studied by SPORT and may represent the last paper from that project. It demonstrated the persistent benefit of surgery compared to non-operative treatment for DS patients out to 8 years. In a subgroup analysis, it also showed no difference in outcomes between patients treated with a posterolateral instrumented fusion, an interbody fusion, or an uninstrumented fusion, though there was no randomization for this aspect of the study. There were not enough patients treated with laminectomy alone to compare their outcomes to those treated with laminectomy and fusion. Two-thirds of respondents to the quick poll on the Spine website favored laminectomy and fusion while one-third favored laminectomy alone for a patient with stenosis and a stable degenerative spondylolisthesis. 
While the evidence supporting surgical treatment for DS is now quite strong, the best operation for the condition remains undefined and likely varies depending on individual patient characteristics.

Another controversial topic that came to light after a December 2015 article in The Boston Globe is concurrent surgery, in which a surgeon runs two operating rooms at a time in order to increase efficiency. While the practice was not uncommon prior to the article, it was rarely discussed and was probably not well-understood by patients. The article brought the issue into the spotlight and resulted in Senate hearings and policy changes in hospitals across the country. In order to better understand how surgeons see the topic, Dr. Laratta and colleagues published the results of a survey asking spine surgeons to define the "critical" aspects of spine surgery, or, in other words, the steps for which the attending surgeon needs to be present. The majority felt that decompression, fusion, and instrumentation were all "critical" steps, while positioning, opening, and closing were not. Dr. Bryant and colleagues from UCSF surveyed parents of patients undergoing surgery for adolescent idiopathic scoliosis, and there was strong agreement that patients should be informed of overlapping or concurrent surgery and that they would not want their child undergoing surgery by a surgeon running two rooms. The quick poll on this topic on the Spine website also indicated that readers felt that concurrent surgery was not acceptable.

On a less controversial front, multiple articles focused on the use of pharmacologic agents to reduce blood loss in spine surgery. A February article by Dr. Nagabhushan and colleagues showed that TXA and batroxobin both reduced blood loss compared to placebo. Dr. Lu and colleagues published a meta-analysis evaluating TXA and aminocaproic acid that showed that these agents significantly reduced blood loss and reduced the transfusion rate by 40%. Spine surgeons have widely adopted these agents for major deformity surgery, but the indications for their use in smaller-magnitude surgery have yet to be defined. Quick poll respondents reported that they tend to use these agents for major deformity surgery and multiple-level fusions, and some also used them for single-level fusion. These medications are now widely used in total joint arthroplasty and cardiac surgery. The spine surgery community will have to determine for which cases they are indicated.

Spine research continues to provide answers to important clinical questions, though it seems that there remain more questions than answers in the spine world. Hopefully we will continue to see high impact research published in Spine in 2019. Happy New Year from The Spine Blog!

​Overlapping and concurrent surgery was commonly performed by surgeons but not widely understood by patients until a much-discussed Boston Globe article on the topic was published a few years ago. The article highlighted the case of a patient who sustained a neurological injury during complex cervical spine surgery on a day on which the attending surgeon was running two operating rooms. This article raised public awareness of the topic and resulted in guidelines being published by the American College of Surgeons and the development of policies regarding the practice at most hospitals. One of the major concerns with overlapping and concurrent surgery was that patients were not adequately informed about the practice, which compromised their informed consent. In order to get a better sense about patient perception of overlapping and concurrent surgery, Dr. Bryant and colleagues from UCSF surveyed the parents of 31 adolescent idiopathic scoliosis (AIS) patients undergoing posterior instrumented fusion. They defined overlapping surgery as cases where "non-critical" portions (i.e. opening, closing, positioning) overlapped and concurrent surgery as those where "critical" portions (i.e. pedicle screw placement, correction) overlapped. Every family approached about the survey agreed to participate, yielding a 100% response rate. Sixty-one percent of the respondents were mothers, 78% had at least a college degree, and 82% had an annual family income in excess of $100,000, indicating this was a relatively wealthy, educated cohort. Essentially all of the respondents strongly agreed that they should be informed of overlapping or concurrent surgery. The group strongly disagreed that concurrent surgery was acceptable, and they felt almost as strongly about overlapping surgery. 
Offering the availability of a "back-up" attending surgeon or informing the parents of research reporting that overlapping and concurrent surgery was not associated with adverse outcomes did not significantly change their opinions about the practice. The parents also agreed that they would cancel surgery on the day of surgery if they were informed that the case would be overlapping or concurrent, and they were willing to pay a premium to avoid overlapping or concurrent surgery for their child. They also felt that trainees should not perform "critical" portions of the case even when supervised and that "non-critical" portions of the case should be supervised by the attending. They had similar feelings about anesthesia providers.

This article does a nice job pointing out the disconnect between surgical practice and patient preferences. While overlapping or concurrent surgery may not be common in AIS cases, it likely occurs in busy centers. Additionally, the parents felt strongly that trainees should not be performing "critical" portions of the case even under direct supervision, and that they should not be opening or closing without attending supervision. In most AIS cases performed at academic institutions, trainees do place pedicle screws under direct supervision and also close wounds without attending supervision. It may be that respondents would have different opinions on these topics if asked about spine surgery on themselves, and their responses about their children's surgery may be more conservative. There is a constant tension between providing the best, safest care possible, the need to train the next generation of surgeons, and a desire to use the operating room in the most efficient way possible. Little to no evidence exists to suggest that the involvement of properly supervised or even unsupervised trainees results in worse outcomes. However, parents of AIS patients clearly prefer to minimize the involvement of trainees. They are also clear in their rejection of the concept of overlapping and concurrent surgery. While adult patients may be more willing to accept trainee involvement or overlapping surgery for themselves, it seems highly unlikely that any patient would be enthusiastic about undergoing concurrent surgery. The American College of Surgeons, CMS, and most hospitals have guidelines against concurrent surgery, but the practice persists. Given the disconnect between current surgical practice and patient preferences, work needs to be done on hospital policy and patient education to get all parties on the same page. 
While this may come at the cost of resident/fellow education and attending surgeon compensation, surgeons have an ethical duty to inform patients of their practice and let them decide for themselves about whether or not they want to have overlapping or concurrent surgery or resident involvement in their case.

Please read Dr. Bryant's article on this topic in the January 1 issue. What are your thoughts on overlapping and concurrent surgery? Let us know by leaving a comment on The Spine Blog.

​People love lists of the best and most popular, especially at the end of the year, so it is fitting that Dr. Badhiwala and colleagues from Canada published their list of the Top 100 most cited papers published in spine journals in the December 15 issue. Similar lists have been generated over the years, though this one looked specifically at journals with the word "spine" or "spinal" in the title, and excluded works published in more general medical (i.e. New England Journal of Medicine) or orthopaedic (i.e. Journal of Bone and Joint Surgery) journals. Of the top 100, 84 were published in Spine, with the European Spine Journal finishing a distant second with 7 of the top 100 most cited articles. Seventy-three of the articles were published between 1990 and 2004, with only 2 having been published after 2010. This demonstrates that articles must be present in the literature for many years in order to generate high numbers of citations. The fact that fewer papers on the list predate 1990 is likely due to the high rate of growth of the number of publications—and thus citations—after this point in time. Not surprisingly, three of the top 5 most cited articles describe outcome measures (i.e. Roland-Morris Disability Questionnaire, Oswestry Disability Index, and SF-36), and these papers are generally cited in any subsequent study using the outcome measure. Guidelines for conducting research and clinical practice guidelines are also widely cited. Somewhat surprisingly, 12 of the top 100 cited papers are lab studies (i.e. biomechanics, basic science of pain mediators), which rarely result in any change in clinical practice. In terms of subject matter, 22 of the top 100 dealt with low back pain, and an additional 12 focused on degenerative disk disease. There were 13 biomechanics papers in the top 100. The authors noted the paucity of cervical spine papers in the top 100, and there were only 5 cervical papers on the list.

The studies on this list come as no surprise. Classic studies describing specific patient reported outcome measures, research and clinical practice guidelines, classification systems, and surgical techniques made up the majority of the list. There are some notable studies that did not make the list, including the Maine Lumbar Spine Study, the Spine Patient Outcomes Research Trial (SPORT), and Herkowitz's classic study comparing laminectomy and fusion to laminectomy alone for degenerative spondylolisthesis. Many of these studies were published in general medical (NEJM, JAMA) or orthopaedic (JBJS) journals, so they were not included. The 4 and 8 year follow-up studies from SPORT were published relatively recently, and there has not been sufficient time for them to accrue high numbers of citations. Somewhat disappointingly, only 8 of the top 100 papers were RCTs. This points out the difficulty of carrying out Level 1 studies on spine topics, and hopefully more will be forthcoming in the future. Many of the RCTs were also published in non-spine journals. This list serves as a jumping-off point for creating bibliographies and reading lists of classic spine articles. The authors indicated they would be publishing a companion study that would include the non-spine specific journals, and such a study would make the list more complete. Please read Dr. Badhiwala's article in the December 15 issue. Are you surprised by any of the papers on this list or the papers missing from it? Let us know by leaving a comment on The Spine Blog.

If spine patients and their surgeons could see the future, surgical decision-making would be very easy. The patient would know how they would do with surgical or non-operative care and then choose the option that leads to their preferred result. Unfortunately, the science of predicting spine patient outcomes based on individual characteristics is in its infancy. In an effort to individualize outcome predictions for patients with disk herniation, spinal stenosis, and degenerative spondylolisthesis, Haley Moulton and colleagues from Dartmouth-Hitchcock, Washington University, and OrthoCarolina worked with Consumer Reports using data from the Spine Patient Outcomes Research Trial (SPORT) to develop outcomes prediction models for surgical and non-operative treatment for each diagnosis. The outcomes included were the SF-36 physical function score, sciatica or stenosis bothersomeness index, sleep quality, and sex life. Consumer Reports worked with investigators to create a user-friendly website for patients that provided information about their condition and an individualized prediction of surgical and non-operative outcomes out to 8 years based on their characteristics. Two groups of respondents were queried: over 1,200 Consumer Reports subscribers known to have low back pain and 68 patients identified from spine center clinics. The Consumer Reports patients tended to be older and included a higher proportion of men with longer-term, less severe symptoms as compared to those recruited from clinic. The Consumer Reports respondents were also not screened by a spine clinic provider, so they determined their own diagnostic category. The study participants were randomly assigned to take a knowledge quiz before or after using the website, and the group who took the quiz after using the website scored higher. Decisional conflict tended to be lower after using the website, though this was more pronounced for the Consumer Reports respondents.
Overall, the participants found the calculator at least moderately useful, and the majority were very or completely satisfied with its ease of use.

To many spine providers, having an accurate way to predict surgical and non-operative outcomes for individual patients is the holy grail of surgical decision-making. It would allow for the selection of surgical patients who would have a high chance of surgical success and the avoidance of surgery in patients who would do just as well with non-operative treatment or who would be at high risk for a serious complication. A perfectly accurate prediction model will never exist, but an outcomes calculator like the one described here should help patients and surgeons make a more informed decision. This paper would have been stronger if more patients who were actual surgical candidates had been enrolled. Only about 20% of patients identified in clinic and provided with a link to the website actually used it, suggesting that most patients were not particularly interested in using the program. It is possible that handing them a card with a web address they needed to type into their computer or phone was not the best way to recruit them into the study. It is harder to interpret the data from the Consumer Reports respondents, as they self-identified as having one of the three conditions under study, and many of them were probably not surgical candidates (their patient reported outcome scores indicated much less severe symptoms compared to the clinic patients). It seems likely that prediction models will appeal to at least a substantial minority of relatively sophisticated patients who have an interest in this type of data. Other patients may prefer a simple treatment recommendation from their provider. In an ideal world, patient outcomes would be recorded in a registry, and the model's accuracy could be improved as more patients are followed. Individualized outcomes prediction represents a major addition to shared decision-making, and hopefully this information will help patients and surgeons come to the right treatment decision.

Please read Ms. Moulton's article on this topic. Do you think such a calculator would be helpful to you and your patients? Let us know by leaving a comment on The Spine Blog.

Cervical spondylotic myelopathy (CSM) is a relatively common cause of balance dysfunction and loss of manual dexterity in the middle-aged and elderly population. Like many spinal conditions, it lacks a gold-standard diagnostic test. The combination of symptoms (e.g. clumsy hands, imbalance), physical exam findings (hyperreflexia, Hoffman's sign, clonus, etc.), and MRI demonstrating cord compression is necessary to make the diagnosis. For patients with severe CSM, the diagnosis is frequently straightforward, though mild or atypical cases can be much harder to diagnose. Given the lack of a gold-standard test, providers would benefit from a better understanding of the test characteristics of the factors that go into the diagnosis. To quantify the test characteristics of the physical exam maneuvers that contribute to the diagnosis of CSM, Dr. Fogarty and colleagues performed a literature review and meta-analysis on the topic. The only physical exam test with even moderate-quality data regarding test characteristics was the Hoffman sign, so their paper focused on this test. They found three papers including patients referred to a spine surgeon for cervical complaints that reported the sensitivity and specificity of the Hoffman sign, using MRI as the "gold standard" diagnostic tool. The authors combined the 3 studies to yield 201 patients, 46% of whom had an MRI "diagnosis" of myelopathy. Overall, the Hoffman sign had a sensitivity of 59% and specificity of 78%, corresponding to a false negative rate of 41% and a false positive rate of 22%. The positive likelihood ratio (the proportion of CSM patients with a positive Hoffman sign divided by the proportion of non-CSM patients with a positive Hoffman sign) was 2.6, and the negative likelihood ratio (the proportion of CSM patients with a negative Hoffman sign divided by the proportion of non-CSM patients with a negative Hoffman sign) was 0.5.
Based on these findings, they concluded that a positive Hoffman sign slightly increased the likelihood of having CSM, while a negative Hoffman sign did not significantly alter the pre-test probability.
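The reported likelihood ratios, and what they do to a pre-test probability, follow directly from the pooled sensitivity and specificity. A minimal sketch of the arithmetic (my own illustration; the paper's figures of 2.6 and 0.5 were presumably computed from the raw counts, so the rounding differs slightly):

```python
# Reproduce the likelihood ratios from the pooled sensitivity/specificity,
# then apply them to the 46% pooled prevalence as a pre-test probability.

def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test: float, lr: float) -> float:
    """Convert probability to odds, apply the likelihood ratio, convert back."""
    pre_odds = pre_test / (1 - pre_test)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

lr_pos, lr_neg = likelihood_ratios(0.59, 0.78)
print(round(lr_pos, 1), round(lr_neg, 1))  # roughly 2.7 and 0.5

# Starting from the 46% pooled prevalence, a positive Hoffman sign raises
# the probability of CSM to roughly 70%, while a negative sign only lowers
# it to roughly 31% -- consistent with the authors' conclusion.
print(round(post_test_probability(0.46, lr_pos), 2))
print(round(post_test_probability(0.46, lr_neg), 2))
```

Seen this way, the authors' conclusion is just Bayes' rule: an LR+ of about 2.6 nudges the diagnosis toward CSM, while an LR- of 0.5 is too weak to rule it out.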

The authors have done a nice job synthesizing and quantifying the limited available data on this topic. Spine providers are clinically aware of the relatively low sensitivity and specificity of the Hoffman sign and most other physical exam findings that contribute to the diagnosis of CSM. We have all seen many CSM patients without a Hoffman sign and plenty of non-myelopathic patients who have a positive Hoffman sign. One of the major limitations of any study on this topic is the lack of a gold-standard to diagnose CSM. While the papers used varying MRI findings as the "gold-standard", this is problematic as many patients can have spinal cord compression and even signal change in the cord without having clinically relevant myelopathy. This may be why there was a 46% prevalence of CSM, which seems very high for a typical spine surgery practice. The authors reported a false negative rate of 41%, which may be artificially elevated given a radiographic diagnosis of CSM (i.e. patients with cord compression without clinically evident myelopathy would not have a positive Hoffman sign yet would be characterized as a false negative). This article helps to hammer home the point that CSM is a clinical diagnosis based on the provider's gestalt after considering findings from the history, physical exam, and imaging. The authors mention doing a prospective study of patients presenting with neck pain who undergo a physical exam and MRI, but such a study would be difficult to carry out. For one, the lack of a gold-standard diagnostic test precludes accurate calculation of test characteristics. Future studies probably need to rely on an expert's clinical impression as the diagnostic gold-standard. Additionally, it would be very difficult to obtain insurance approval for an MRI in a patient presenting only with axial neck pain. 
A useful exercise in the future may be to create a diagnostic prediction algorithm that includes findings from the history, physical exam, and imaging findings. It is unlikely that such an algorithm would outperform an experienced expert, but it might be of use for generalists or less experienced spine providers.

Please read Dr. Fogarty's article on this topic in the December 1 issue. Does this change how you view the role of the Hoffman sign in the diagnosis of CSM? Let us know by leaving a comment on The Spine Blog.

​Long-term outcome data following spine surgery are hard to come by, and long-term non-operative outcomes are virtually non-existent. The Spine Patient Outcomes Research Trial (SPORT) received NIH funding for eight years of follow-up for degenerative spondylolisthesis (DS) patients treated with surgery or non-operative treatment. Over 600 patients enrolled, and approximately half agreed to be randomized to surgery or non-operative care. Twenty-eight percent of patients randomized to surgery did not undergo surgery, while 54% of those randomized to non-operative care did have surgery. This high level of treatment non-adherence prevented meaningful analysis of the RCT on an intent-to-treat basis. As such, the randomized and observational cohorts were combined in an as-treated analysis that was statistically controlled for potential confounders. The follow-up rate at 8 years was 56%, and this loss to follow-up had some potential to bias the results. In the as-treated analysis, surgery had a significant advantage compared to non-operative treatment, and this difference remained significant out to eight years. The surgery patients improved approximately 10 points more on the Oswestry Disability Index at 8 years. Similar differences were observed on the SF-36 and other outcome measures. In a subgroup analysis, there were effectively no significant outcome differences among those treated with an uninstrumented fusion, pedicle screw instrumentation, and pedicle screws plus an interbody device. At eight years, the reoperation rate was 22% and did not differ significantly across fusion techniques.

SPORT produced some of the highest quality data available to the spine community. Despite this, significant limitations such as crossover and loss to follow-up have potentially biased the results. Nonetheless, the data are consistent across eight years of study and seem to match clinical experience. There has been significant controversy about the best surgical technique to treat DS, with ongoing debate about the most effective type of fusion1 as well as about whether fusion is even necessary.2,3 The results of the current study do not offer much new information on the topic and are limited by the fact that patients were not randomized to the different fusion techniques. As a result, the patients were significantly different at baseline, and, despite controlling for these differences, it is hard to know if confounders affected the outcome. A prior observational study suggested the results of uninstrumented fusion could degrade over time due to a high rate of pseudarthrosis, though this was not observed in SPORT.4 Fusion technique did not affect reoperation rate either. While the different techniques have specific advantages and disadvantages, these do not seem to affect long-term patient reported outcomes or reoperation rate. Similar to the long-term SPORT studies on spinal stenosis and disk herniation, the current study demonstrated the long-term advantage of surgery compared to non-operative treatment for DS. The best operation for DS remains unknown, and it likely depends on patient and disease characteristics. Now that most would agree that surgery leads to better long-term outcomes than non-operative treatment, hopefully future studies can help surgeons select the best surgical technique for individual patients.

Please read Dr. Abdu's article on this topic in the December 1 issue. Does this article change how you consider long-term outcomes for DS patients? Let us know by leaving a comment on The Spine Blog.

Spine surgery is technically challenging, and errors can result in complications that adversely affect patients both in the short and long term. As such, it is difficult to teach surgical trainees safely. In the traditional apprenticeship model, trainees gradually perform an increasing number of steps in the procedure under the supervision of the attending surgeon until they are proficient and can perform the operation independently. This advancement of responsibility balances the goals of improving trainee competence and of avoiding complications due to trainee error. If these competing goals are not well-balanced, residents and fellows may not be proficient by the end of their training or patients may experience unnecessary harm. Surgical simulation offers the promise of providing risk-free training for surgical residents and fellows, and it has been demonstrated to improve skills, particularly in endoscopic surgery (e.g. laparoscopic cholecystectomy, knee arthroscopy). The development of high-fidelity open surgical simulators has been difficult. Options include synthetic models, animals, cadavers, and virtual reality. Unfortunately, all of these tend to be expensive and frequently lack sufficient fidelity to allow for skill transfer to real surgery. Until now, there has been scant literature describing spine surgery simulation. Drs. Coelho and Defino, from Brazil, attempted to fill this void with their article in the November 15 issue. Their study describes both a synthetic, manufactured model and a virtual reality simulator. Following development of these simulators, they asked 16 experienced spinal surgeons to evaluate them. The synthetic model was designed to represent the lumbar spine and included a soft tissue envelope, a discoligamentous spine model, and a thecal sac with nerve roots and saline representing CSF. This allowed for simulation of spinal exposure, pedicle screw placement, laminectomy, and durotomy repair.
The virtual reality simulator was not described in great detail, but it reportedly simulated similar steps. The paper contains limited data, though the authors did report that 11 of 16 surgeons felt the synthetic simulator had the potential for practical application in training. Even fewer data were reported about the virtual simulator, though all but one surgeon who tested it felt it might have some role in spine surgery training.

The authors should be congratulated for creating spine surgery simulators, and a high fidelity simulator that allows trainees to progress along the early portion of the learning curve in a safe environment is very much needed. This paper basically served to describe their models and indicated that experienced surgeons felt they had a potential role in surgical training. A scientific paper is not the best medium to convey the details about a surgical simulator, and video or live demonstration of the simulators would be necessary to completely grasp how they function. It would be helpful to know the cost of the two simulators and the reusability of the synthetic simulator, but the authors did not report this. I imagine that the next step will be a description of how trainees perform on the simulators and then a study looking at whether using the simulators results in measurable performance improvement in the OR. I have found that using Sawbones models to train residents to place spinal hardware helps them understand the steps and gain familiarity with the instrumentation, though the model lacks fidelity and only allows progress along the very early portion of the learning curve. Virtual reality holds the promise of realistic simulated experiences including haptic feedback, though the technology does not currently exist to provide high fidelity simulation of open surgical procedures.

Please read Dr. Coelho's article in the November 15 issue. Does this change how you view the role of surgical simulation in spine surgery training? Let us know by leaving a comment on The Spine Blog.

S2 alar-iliac (S2AI) screws likely represent the best option for spinopelvic fixation, though their placement can be technically difficult. There is a lack of good anatomic and fluoroscopic landmarks to guide their placement, and determining whether the screw is entirely within bone using fluoroscopy can be difficult. Navigation and robotic drill guide placement are newer techniques that hold promise to improve the accuracy of screw placement in spine surgery. In order to determine if a robot improves the accuracy of S2AI screw placement, investigators at Columbia University compared S2AI screw accuracy between 59 screws placed using a free hand technique (i.e. without image guidance) and 46 screws placed using a robot-positioned drill guide. They found an 8.5% breach rate with the free hand technique and a 4.3% breach rate with the robot, though this difference was not statistically significant. Moderate-severe breaches (>3 mm) occurred in 5.1% of free hand screws and 2.2% of robot-directed screws. They also compared the trajectory of the screws and found that the robot-directed screws aimed somewhat more caudally. No screws caused neurovascular or visceral injury, and there were no inferior breaches into the sciatic notch. Based on these data, the authors concluded that the free hand and robot-directed techniques had comparable results.

The authors have done a nice job creating a comparative study looking at these two S2AI techniques. While the authors work at a busy spinal deformity center, they included only 51 patients who underwent surgery over two years, which indicates how difficult it is to put together a large series of these cases; even busy deformity centers are not placing large numbers of S2AI screws. The relatively low numbers and relatively low breach rate resulted in an underpowered study. The free hand technique had twice the breach rate of the robot-directed technique, yet this difference was not close to statistically significant. Based on the lack of statistical significance, the authors concluded that accuracy was similar for both techniques, yet the data indicate that the robot might cut the breach rate in half. The only way to prove that would be a large, likely multicenter trial, which is always challenging and expensive. The data suggest that one of the robot-directed screws was markedly off target, likely by a centimeter or more. It would be interesting to know the mechanism of failure for that screw given that the others seemed to be placed quite accurately. Stereotactic navigation is another option for placing S2AI screws, and the authors did not comment on it. In my experience, it can be difficult for the cameras to capture the stereotactic array on the pedicle finder due to the caudal angulation required to place the screw. With all of the new technology aimed at improving screw accuracy, S2AI screw placement may well become easier and more reliable. It remains to be seen which technique (free hand, fluoroscopy guided, navigation, 3D printed drill guides, robots, etc.) will lead to the most accurate placement while maintaining an efficient workflow. It seems unlikely that most spine surgeons will ever be able to match Dr.
Lenke's skill with free hand screw placement, and even he had a 5% moderate-severe breach rate. Navigation and robotics seem to be here to stay, and it will be up to the spine surgery community to figure out how to best use these tools.
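To put the power problem in perspective, a back-of-the-envelope two-proportion sample-size calculation (my own arithmetic, not from the paper) shows roughly how many screws per arm would be needed to detect the observed 8.5% vs. 4.3% difference:

```python
# Approximate per-group sample size to detect two proportions p1 vs p2
# with two-sided alpha = 0.05 (z = 1.96) and 80% power (z = 0.84),
# using the standard normal-approximation formula.
import math

def n_per_group(p1: float, p2: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Observed breach rates: 8.5% free hand vs 4.3% robot-directed.
# The result is on the order of 500+ screws PER ARM, versus the
# 59 and 46 screws actually studied -- hence the underpowered comparison.
print(n_per_group(0.085, 0.043))
```

Under these assumptions, the study would have needed roughly ten times as many screws in each arm to have a reasonable chance of declaring the observed difference significant.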

Please read Dr. Shillingford's article on this topic in the November 1 issue. Does this change how you view the role of robotics in S2AI screw placement? Let us know by leaving a comment on The Spine Blog.

Multiple studies have evaluated risk factors for recurrent disk herniation following lumbar discectomy, oftentimes with conflicting results. Younger age, male gender, smoking, occupational lifting, large annular defect, and obesity have been cited as risk factors, though the data are inconsistent. In an effort to understand both patient and biomechanical risk factors for reherniation, Dr. Li and colleagues from China analyzed a cohort of 321 patients undergoing single level discectomy. These patients were followed for a minimum of 6 years, with a mean follow-up of approximately 8 years. Over the course of follow-up, 58 (18%) experienced a recurrent disk herniation, defined as a herniation on the same or contralateral side at the same level as the index surgery resulting in a recurrence of pain after a minimum of six months of pain relief. The authors performed both univariate and multivariate analyses to determine risk factors for recurrence. Male gender, younger age, smoking, occupational lifting, undergoing a bilateral laminectomy, and transligamentous extrusion were the strongest predictors of reherniation. Radiographic factors, including increased disk height, increased sagittal plane range of motion on flexion-extension radiographs, more sagittal facet orientation, and facet asymmetry, were also associated with reherniation. In multivariate logistic regression, only age and facet asymmetry remained as significant independent predictors of reherniation.

The authors have done a nice job analyzing an exhaustive list of risk factors for recurrent lumbar HNP. Their results were generally consistent with the literature as they found that younger age, male gender, smoking, occupational lifting, and obesity were all risk factors for reherniation, and these factors have been reported as risk factors in earlier studies. Their finding that asymmetric facet orientation was an independent risk factor for reherniation was novel. It can be difficult to interpret studies like this where a large number of factors are found to be associated with an outcome in univariate analysis (in this study, 11 factors were significantly related to reherniation), yet few (in this case, 2 factors) remain associated in the multivariate analysis. One possibility is that many of the factors are related to each other (i.e. young age, occupational lifting, disk height, and sagittal ROM are likely all related), and thus not independent predictors. Another possibility is that the study is underpowered to look at a high number of risk factors, a likely scenario in this case (only 58 reherniation patients with 19 risk factors analyzed). A number of risk factors for reherniation have been identified by multiple studies, and it raises the question of what to do with this knowledge. Some factors such as obesity, smoking, and occupational lifting are theoretically modifiable, though most of the factors are not. If sufficient data could be obtained to create an accurate predictive model, patients could be advised about their risk of reherniation. The only strategy to reliably eliminate this risk would be to perform a fusion along with discectomy, which would create other problems such as adjacent segment degeneration. It seems as though the predicted reherniation rate would need to be very high to justify performing a fusion, and it is possible that such an approach would never be indicated.
The best we can do is probably to advise patients to lose weight and stop smoking, though most spine surgeons are aware that such advice frequently goes unheeded.
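The underpowering concern can be made concrete with the common (if rough) rule of thumb of about 10 outcome events per candidate predictor in logistic regression. A quick sketch (my own arithmetic, not from the paper):

```python
# Events-per-variable (EPV) check for the multivariate model described
# above: 58 reherniation events spread across 19 candidate predictors.
events = 58        # recurrent herniations observed
predictors = 19    # risk factors entered into the analysis

epv = events / predictors
print(round(epv, 1))        # roughly 3 events per variable, well below ~10

# Under the ~10-EPV rule of thumb, a model with 19 predictors would want
# on the order of 190 events, i.e. a cohort several times larger.
print(10 * predictors)
```

With only about 3 events per variable, it is unsurprising that most univariate associations dropped out of the multivariate model; the data are simply too sparse to estimate that many coefficients reliably.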

Please read the article by Dr. Li in the November 1 issue. Does this change how you view risk factors for recurrent lumbar disk herniation? Let us know by leaving a comment on The Spine Blog.

The adult deformity literature has long focused on compensatory mechanisms for the loss of lumbar lordosis and sagittal imbalance. These mechanisms typically include pelvic retroversion and flexion at the hips and knees, changes that help keep the head centered over the pelvis. Spinal deformity surgeons have also noted a tendency toward thoracic hypokyphosis in patients with sagittal imbalance, though this mechanism has rarely been discussed in the literature. Patients can effectively have a panspinal flat back deformity, with loss of lumbar lordosis and decreased thoracic kyphosis, yet maintain their sagittal vertical axis in the normal range. In order to better understand this topic, Dr. Protopsaltis and colleagues in the International Spine Study Group evaluated 219 adult deformity patients in their registry who had undergone corrective surgery with fusion from the lower thoracic spine to the pelvis. The mean age was 62, and approximately 70% were female. The majority were undergoing revision surgery, and nearly 50% underwent a 3 column osteotomy. The authors followed the patients out to 1 year with full-length radiographs and patient reported outcome measures. They divided the patients into two groups: those with reciprocal kyphosis, defined as an increase in kyphosis of the unfused thoracic spine of at least 15 degrees, and those with maintained thoracic alignment, defined as less than a 15 degree change. The demographic characteristics were not different between the 2 groups. The reciprocal kyphosis group had a greater pelvic incidence minus lumbar lordosis mismatch at baseline and also less thoracic kyphosis. The novel radiographic parameter evaluated by this paper was the expected thoracic kyphosis, which the authors calculated as pelvic incidence minus 20 degrees, based on prior radiographic investigations. While the 2 groups had similar baseline expected thoracic kyphosis, the reciprocal kyphosis group had greater baseline thoracic compensation, defined as the difference between expected thoracic kyphosis and baseline thoracic kyphosis (in other words, less thoracic kyphosis than expected). The reciprocal kyphosis group underwent greater deformity correction compared to the maintained thoracic alignment group, with a significantly greater change in pelvic incidence minus lumbar lordosis mismatch, pelvic tilt, sagittal vertical axis, and T1 pelvic angle. At one year, the reciprocal kyphosis group had a markedly higher rate of proximal junctional kyphosis than the maintained thoracic alignment group, 66% versus 19%. Overall sagittal alignment as measured by sagittal vertical axis was not significantly different between the 2 groups, and there were also no differences in patient reported outcomes.
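The paper's two derived parameters reduce to simple arithmetic. A minimal sketch (the expected-kyphosis relation, PI minus 20 degrees, is the paper's; the patient values below are hypothetical):

```python
# Radiographic parameters described above. All angles in degrees.

def expected_thoracic_kyphosis(pelvic_incidence: float) -> float:
    """The paper's expected TK: pelvic incidence minus 20 degrees."""
    return pelvic_incidence - 20.0

def thoracic_compensation(pelvic_incidence: float, baseline_tk: float) -> float:
    """Expected TK minus measured baseline TK.

    Positive values mean the patient has LESS thoracic kyphosis than
    expected, i.e. is compensating through thoracic hypokyphosis.
    """
    return expected_thoracic_kyphosis(pelvic_incidence) - baseline_tk

# Hypothetical patient: PI of 55 degrees, measured baseline TK of 20 degrees.
# Expected TK would be 35 degrees, so the patient shows 15 degrees of
# thoracic compensation.
print(expected_thoracic_kyphosis(55.0))
print(thoracic_compensation(55.0, 20.0))
```

Framed this way, the study's finding is that patients entering surgery with a large positive compensation value were the ones most likely to "give back" that kyphosis reciprocally once the lumbar deformity was corrected.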

The authors have done a nice job continuing their work on understanding sagittal imbalance and its correction. In some ways, thoracic compensation through hypokyphosis is a more obvious compensatory mechanism than pelvic retroversion given that spine surgeons frequently observe it on full-length radiographs, whereas pelvic retroversion can be more subtle. It is not surprising that patients with baseline thoracic compensation are more likely to develop reciprocal kyphosis given that they, by definition, have more flexible thoracic spines. Reciprocal kyphosis that does not go on to develop PJK does not appear to be harmful, given that these patients maintained their overall alignment and had patient reported outcomes similar to those who maintained their thoracic alignment. While those with reciprocal kyphosis had an increased risk of PJK compared to the maintained thoracic alignment group, risk factors for PJK within the reciprocal kyphosis group were not well-defined. More aggressive correction is a well-known risk factor for PJK, and avoiding it in those with baseline thoracic hypokyphosis might be helpful. The authors suggested that those with baseline thoracic compensation might be candidates for fusion to the upper thoracic spine in order to prevent reciprocal kyphosis. However, given that this group achieved appropriate overall post-operative sagittal alignment despite the reciprocal kyphosis, a similar degree of lumbar correction combined with prevention of reciprocal kyphosis could result in overcorrection and an increased risk of PJK in the upper thoracic spine. Sagittal imbalance remains a challenging problem for which we still do not have a perfect solution. A better understanding of the non-skeletal factors, such as anterior soft tissue contractures and central neural mechanisms, that contribute to sagittal imbalance and PJK may lead to more effective treatment in the future.

Please read Dr. Protopsaltis's article on this topic in the November 1 issue. Does the concept of thoracic compensation improve your understanding of the sagittal imbalance? Let us know by leaving a comment on The Spine Blog.

All regional health systems have methods, either formal or
informal, for triaging and managing spine trauma. It is challenging to
provide high quality spine care across a region as spine fractures are common,
the majority are stable, and spine trauma specialists are relatively uncommon
and tend to be concentrated in trauma centers. This creates a high volume of
work, much of it dedicated to finding the small proportion of patients with unstable
fractures, while treating all fracture patients as if they have an unstable
spine until instability can be ruled out by an expert. In the United States,
with its decentralized health care system, inter-hospital policies about spine
trauma are uncommon. Spine fractures are managed by a variety of specialties
including emergency medicine, internal medicine, physiatry, orthopaedic
surgery, and neurosurgery, depending on local culture and expertise of the
managing physician. In countries with centralized, government-run health
systems, protocols can be developed to triage and manage spine trauma patients
in a more consistent pattern across the system. In order to evaluate the
efficiency of the United Kingdom (UK) system, Hill and Marynissen analyzed 100
consecutive spine trauma patients managed by general orthopaedists at their
district general hospital using an electronic consultation with spine specialists
at the regional trauma hospital. Patients with high-energy trauma or spinal cord
injuries were transferred directly to the trauma center and were not included
in this analysis. In the current cohort, the average age was 86 years, and 85%
were female, suggesting that the vast majority of fractures represented low
energy fragility fractures. Only 6% of patients had an unstable injury, and 17%
were found to have no fracture after work-up. Eighty percent of patients
underwent a CT scan, and 37% underwent MRI as recommended by the spine trauma
specialist. The average response time from consultation until the spine
specialist made initial recommendations was 19 hours, and the median time to
complete imaging and develop a definitive management plan was 72 hours. British
orthopaedic guidelines recommend that spinal immobilization should be
maintained for no more than 48 hours, and the authors found that only 34% of
patients had imaging completed and a definitive plan within that timeframe.

The authors have produced an interesting study looking at a
health delivery topic that affects patients and providers on a daily basis. Similar
issues arise in the United States, though there tend not to be spine-specific
protocols for inter-hospital consultation. In general, patients will frequently
present to smaller hospital emergency departments where low energy spine
fractures are diagnosed. If the local physicians are trained in treating these
stable fractures, sometimes no consultation with the trauma center is generated.
Other times, it results in a phone call and triage of the patient by the trauma
center. Now that images can be transmitted electronically, the spine
specialists can usually evaluate the images and make recommendations. In the
past, this was not possible, and management was based on radiology reports,
which varied in their accuracy. Unlike the British system, in which the spine
consultation can be delayed depending on the availability of the spine
specialist, in the United States there is an expectation that this consultation
be performed whenever the phone call is placed. As a result, many of the
consultations are performed by on-call residents. In some cases, this can
result in inappropriate management, either missing an unstable fracture or
transferring a patient to a higher level of care unnecessarily. I am unaware of
a paper looking at this topic in the United States, but it seems likely that the
time to imaging or definitive management is faster in the US system. One of the
barriers to speedy care in the British system is the lack of 24-hour access to
MRI. While many small US hospitals lack MRI access, this situation frequently
results in the transfer of patients to tertiary care centers when a consultant
recommends that study. This paper does a nice job shining a light on factors
that delay care and are likely present in most health systems. Hopefully more
work can be done on this topic to expedite care and prevent the negative side
effects of unnecessary, prolonged bedrest on the vulnerable elderly population.

Please read this article in the October 15 issue. Does this
change how you consider the regional management of spine trauma patients? Let
us know by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS

Patients frequently ask when they can return to driving after cervical spine surgery, and surgeons have to make recommendations without much evidence supporting their advice. There are a handful of studies looking at this, and most make recommendations based on driver reaction time, a metric that evaluates a specific, focused factor that probably does not capture overall driving ability.1 Given the lack of evidence on this topic, Dr. Moses and colleagues performed a survey of Cervical Spine Research Society (CSRS) meeting attendees to determine practice patterns and see if a consensus exists on the topic. In addition to evaluating return to driving, they also evaluated recommendations for post-operative cervical collar use, which would likely impact decisions about return to driving. Of the 98 surveys they handed out, 71 were completed, and 80% of respondents were orthopaedic spine surgeons (20% neurosurgeons). In general, most surgeons allowed patients to return to driving between 2 and 6 weeks, with 2 weeks being the most common response (40%) following 1 or 2 level ACDF or disk replacement. Six weeks was the most common response for > 2 level ACDF (52%) or laminectomy and fusion (60%). About 40% of surgeons recommended less than 2 weeks of driving restriction following foraminotomy or disk replacement. In terms of collar use, there was also a broad range of practices. Following single-level ACDF, 42% used no collar, 27% used a hard collar, and 31% used a soft collar. Of those who used a collar, 40% used it for 2 weeks and 36% for 6 weeks. For > 2 level ACDF, 23% used no collar, 68% used a hard collar, and 10% used a soft collar. The majority (57%) kept the collar in place for 6 weeks. Similar bracing patterns were reported for laminectomy and fusion. Following foraminotomy and disk replacement, approximately 2/3 of surgeons used no collar, and under 10% used a hard collar.
Surgeons with over 15 years of practice experience were more likely to allow patients to return to driving in under 2 weeks following > 2 level ACDF and laminectomy and fusion (47% vs. 24%).

The authors have done a nice job administering a survey on a topic on which there is very little literature to guide practice. In these situations, consensus is likely the best way to determine best practice patterns. Not surprisingly, given the lack of evidence on this topic, there was no clear consensus and a wide variation in practice. When deciding about return to driving, one must consider reaction time, range of motion, pain severity, and judgment. All of these impact driving ability, and cervical spine surgery, collar use, and opioid medication can adversely affect these factors. There is one RCT that demonstrated no advantage to collar use following one level ACDF, but no data looking at other procedures.2 The authors point out that driving while wearing a cervical collar or while taking narcotic pain medication likely puts the patient and the surgeon who approved return to driving in legal jeopardy. Both factors almost certainly have a negative effect on driving performance. The most concerning finding in the paper is that 28% of surgeons allow patients to drive while taking opioids, and 31% allow driving in a cervical collar. It seems as though these practices should be changed in order to promote driver safety and reduce legal exposure. This survey has the same limitations as all survey studies, the most important of which is potential limited generalizability to the entire population of spine surgeons. However, the authors reported a 70% response rate, which is relatively good, and this group of surgeons at CSRS likely has practices that mirror those of the greater spine surgeon community. The most important finding of this study is the wide variation in practice, as would be expected given the lack of evidence on the topic.
It would be helpful if professional societies could issue guidelines on collar use and return to driving as these issues come up after every surgery, and it would be nice to inform patients that our recommendations are based on something (slightly) more solid than our individual opinions. Such guidelines could also provide some legal cover, provided they are followed.

Please read Dr. Moses's article on this topic in the October 15 issue. Does this change how you view collar use and return to driving following cervical spine surgery? Let us know by leaving a comment on The Spine Blog.

Physical therapy (PT) has been a mainstay in the treatment
of acute low back pain (LBP), along with anti-inflammatory medication,
admonitions to avoid bedrest, and education about the benign natural history of the condition. Level
1 data on these interventions are sparse, and, given the high incidence of
acute LBP, evidence should exist to guide treatment as the potential societal
cost of these interventions is relatively high. In order to address this, Dr.
Rhon and colleagues performed an RCT comparing usual care to early physical
therapy for the treatment of acute LBP in an active military population. They enrolled
119 patients who all attended a 20 minute educational class about acute LBP and
were then randomized to receive usual care (UC) or 8 PT sessions over the next
3 weeks. The primary outcome measure was the Oswestry Disability Index (ODI)
score at one year, and they also recorded numeric pain rating scales and
healthcare related costs. The average age was 27, and 85% of patients were
male. The only patient reported outcome that was significantly different was the
4 week ODI score, which was 4 points better in the PT group. The PT group spent
about $1,000 more on LBP related treatments, though overall one-year healthcare
costs were similar for the two groups. The UC group spent about $700 more on
non-LBP related care.

The authors have done a nice job performing an RCT to study
the effect of early PT on acute LBP outcomes. Their results suggest that early
PT does not yield any long-term advantage compared to UC in active military
patients with acute LBP, and the short-term advantage is likely clinically
insignificant. An important consideration when interpreting these data is the
population to which they apply: active military patients with acute LBP.
Randomized trials offer the benefit of eliminating sources of bias and
confounding; however, they tend to answer very narrow questions in very
specific populations. These results are not
generalizable to the non-military population, to patients with chronic LBP, or
to those with radiculopathy or claudication. The other major limitation of this
study is that it was likely underpowered according to the authors’ power
analysis; however, there were no trends indicating that PT resulted in a
clinically significant long-term benefit compared to UC. The most important
finding from this trial is the benign natural history of acute LBP in a young,
fit military population. These patients should be reassured that they will most
likely get better regardless of treatment received, and expensive or
time-consuming early treatment is probably not necessary. For the minority
whose symptoms persist, more intensive treatment can be started.

Please read Dr. Rhon’s article on this topic in the October
1 issue. Does this change your view of the role of PT in acute LBP? Let us know
by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS

The best treatment for axial low back pain (LBP) in patients
with degenerative disk disease (DDD) remains unknown, with relatively modest
improvements reported after both surgical and non-operative treatment. Most of
the RCTs on the topic have demonstrated no differences in outcomes between fusion
and non-operative care,1-3
though a more recent Japanese RCT demonstrated superior outcomes for the fusion
group.4 This study by
Ohtori et al. also compared outcomes between ALIF and posterolateral fusion
(PLF) and showed superior outcomes for the ALIF patients on some outcome
measures. The groups were small (15 ALIFs, 6 PLFs), so strong conclusions
regarding the advantage of one technique over the other could not be drawn. In
order to evaluate their outcomes with ALIF for DDD, Dr. Kleimeyer and
colleagues from Stanford and MGH retrospectively compared patient reported
outcomes between 42 patients who underwent ALIF and 33 who were treated
non-operatively for one or two level DDD. At a mean follow-up of 7.4 years,
they found that the ALIF patients improved significantly more on a visual
analog scale for back pain (3.4 vs. 0.9) and on the ODI (14 points vs. -0.2
points) compared to the non-operative patients. At baseline, the two groups
were similar, with the surgery group trending towards having more smokers and
patients involved in active litigation. Baseline ODI scores indicated
relatively mild disability (27.4 in the ALIF group and 23.9 in the
non-operative group), and patients had a mean age of about 50. Complication
rates were low in the ALIF group, with about a 10% pseudarthrosis rate (1 reoperation
for pseudarthrosis) and 12% rate of radiographic, asymptomatic adjacent segment
degeneration. The authors concluded that ALIF was more effective for axial back
pain associated with one or two level DDD than non-operative treatment.

While RCTs performed in Europe in the early 2000s painted a
grim picture for the effectiveness of fusion for DDD, the RCT by Ohtori and the
current study suggest more favorable outcomes. The major limitation of the
current study is that it is a non-randomized, retrospective cohort study prone
to bias. The two groups appeared similar on measured baseline characteristics,
but factors such as educational attainment, socioeconomic status, body mass
index, and psychological and medical comorbidities were not measured. These
have all been shown to be strong predictors of outcomes following lumbar surgery and
may have been different between the two groups. The two groups were also
inherently different in that one group chose to have surgery, while the other group
chose not to. The patients also had relatively low levels of baseline disability
with ODI scores in the 20s, and the majority were working preoperatively. In
comparison, the patients in the Fairbank, Brox, and Fritzell RCTs had baseline ODI
scores in the 40s, and the majority were out of work. In the Ohtori trial, the
baseline ODI scores were in the 60s. The European studies demonstrated 12-14
point improvements on the ODI with surgery, similar to the current study. These
older studies also showed similar, if slightly less, improvement with
non-operative care compared to the surgical groups, while the current study
showed effectively no improvement with non-operative care. The European RCTs
all employed PLF, and it is possible that ALIF has better outcomes for this
problem. Given that the current study represents Level III data, one cannot use
it to conclude that ALIF is superior to non-operative treatment for axial LBP
associated with DDD. However, this and the Ohtori study should provide
motivation to perform an RCT comparing ALIF to structured non-operative care
for these patients, preferably in a US or international population.

Please read Dr. Kleimeyer’s article on this topic in the
October 1 issue. Does this change how you view the role of ALIF for the treatment
of DDD? Let us know by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS

It is well-established that greater surgical invasiveness
and patient frailty are associated with increased complications. However, the
interplay of these two factors has not been explored across different lumbar
diagnoses, and the effect of frailty on patient reported outcomes is
not well understood. In order to better understand these issues, Dr. Yagi
and colleagues from Japan evaluated patient reported outcomes (PROs) and complications
following surgery for adult spinal deformity (ASD), lumbar degenerative
spondylolisthesis (DS), and lumbar spinal stenosis (SpS) in over 450
consecutive patients with at least 2 years of follow-up. They recorded baseline
modified frailty index (mFI) and Charlson Comorbidity Index (CCI) and also
recorded baseline and two year PROs (ODI and SRS scores for ASD patients, SF-36
PCS and MCS scores for DS and SpS patients). At baseline, they found the ASD
patients had significantly higher mFI and CCI than DS and SpS patients. When stratifying
by frailty and CCI, they found that patients with greater frailty and
comorbidities tended to have worse baseline PROs. For the ASD patients, two
year ODI scores were significantly worse for the most frail patients, while the
2 year PCS scores for DS and SpS patients were not associated with frailty. The
authors did not calculate change scores across frailty strata, though the
figures appear to show similar change scores for the different frailty groups
or possibly even greater improvements for the frailest patients given their
worse baseline scores. Major complication rates were highest in the ASD group
and increased with increasing frailty, while the patterns were less consistent
for DS and SpS. The authors concluded that frailty was associated with worse PROs
and increased complications in ASD surgery, but that those relationships did
not hold in DS and SpS.

The authors have done a nice job analyzing the interaction
between frailty and surgical invasiveness and have confirmed what surgeons
would expect—namely that frail patients have worse outcomes and higher
complication rates than healthier, more robust patients. It is somewhat
difficult to interpret the results for DS and SpS given that only 10% of DS
patients and 2% of SpS patients were classified as frail (compared to 24% for
ASD). Additionally, the authors recorded different outcome measures for ASD (namely
the ODI) than for DS and SpS (namely the PCS), making comparisons of PROs
across the diagnostic categories impossible. Most surgeons would believe that
increasing frailty would lead to worse outcomes and higher complications
following any surgery, not just major deformity surgery. While the differences
may be more pronounced with major surgery, the relationship should hold for any
type of surgery as long as the study is appropriately powered. Given the low
number of frail patients in the DS and SpS cohorts, this analysis was likely
underpowered. The authors also did not provide many details about the ASD
surgery other than stating that patients had Cobb angles over 20 degrees and
fusions spanning at least five levels. While the authors conclude that PROs are
worse for the frailest ASD patients as compared to the healthier patients, the
change scores are actually greatest for this group. This suggests that frail
patients still gain significant benefit from ASD surgery, but they experience a
high rate of complications. The paper does not provide any clear-cut decision
rule or algorithm to help with the decision about proceeding with ASD surgery in
the elderly population, though surgeons can inform these patients that surgery
will likely lead to better pain and function than their baseline but that they
may have a bumpy road getting there.

Please read Dr. Yagi’s paper in the September issue. Does this
change how you view the role of frailty in surgical decision making? Let us
know by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS

As the population continues to age, the number of fragility
fractures including hip fractures, distal radius fractures, and Type II dens
fractures is on the rise. The best treatment for Type II dens fractures in the
elderly remains controversial, with both observational cohort studies and large
database analyses suggesting a possible survival advantage associated with
surgery.1-3
These studies have generally included all patients over 65, and approximately
50% of Type II dens fracture patients are over 80. One prior study has looked at the over 80 age
group exclusively and reported in-hospital mortality rates of 15% for
non-surgical patients and 12.5% in surgical patients.4
Large series of octogenarians with Type II dens fractures treated surgically
are rare, but the group at Shock Trauma in Baltimore was able to assemble a
consecutive series of 43 such patients treated over a 10 year period. All
patients had significantly displaced (> 5 mm) and/or angulated (> 15
degrees) fractures and underwent posterior C1-C2 Harms fusion. They noted that
the majority of patients with less displaced fractures were treated in a collar. The
average age was 84, and patients had a mean Charlson Comorbidity Index of 1.4.
They reported a 30 day mortality rate of 2.3% (one patient), with a 1 year
mortality rate of 19%. There was a high rate of complications, notably delirium
(42%), dysphagia (28%), feeding tube placement (14%), and reintubation (9%). They
did not report any cases of infection or hardware failure. The authors
concluded that this represented an acceptably low mortality rate, in line with
or better than previous reports.

The authors have done a nice job putting together a large
case series of octogenarians undergoing surgery for displaced Type II dens
fractures. Such a series can probably only be put together at a handful of
trauma centers in the United States, and they only had about 4 of these cases
per year at a very busy institution. While the results are reassuring and
suggest that this surgery can be done with a low short-term mortality rate in
this population, it does not provide us with much guidance on how to treat the
average, elderly patient with a Type II dens fracture. Most of these fractures
are less displaced than those included in the current study, and it is not
clear if surgery is advantageous in this population. While multiple studies
have shown a potential survival advantage associated with surgery, these have
all been observational studies subject to a high risk of selection bias. While statistical
efforts can be made to control for potential confounders, unmeasured confounders
always exist. The current study did not offer a control group to which outcomes
could be compared, and it is possible that the patients treated with surgery
were substantially healthier than those treated non-operatively. It is not
clear if the low mortality rate observed in the current series is due to the
high quality care provided or due to the selection of healthier patients than those
included in prior studies. The low mortality rate is reassuring and should give
surgeons treating these challenging patients some peace of mind when they
decide to operate on a displaced Type II dens fracture. Until a high quality
RCT is done to address this question, the optimal treatment for less displaced
Type II dens fractures in the elderly will remain unknown. Given the challenges
involved in completing such a study, I would not anticipate seeing Level 1 data
anytime soon.
Please read Dr. Clark’s article in the September 15 issue. Does this change how
you view surgical treatment of displaced Type II dens fractures in the elderly?
Let us know by leaving a comment on The Spine Blog.

The spine literature is replete with basic and clinical
science papers that frequently do not offer much practical assistance in the
day-to-day practice of a spine care provider. One of the challenges facing
spine centers is matching new patients to the appropriate provider. Many
multidisciplinary spine centers combine non-interventional spine care
providers, pain physicians who perform injections, and orthopaedic and
neurosurgical spine surgeons into a large group practice. Given that only about
10% of spine patients will go on to spine surgery, some type of triage process
is necessary to identify new patients who are appropriate to see a surgeon on
their first visit. An alternative approach is to have non-operative providers
screen all patients prior to surgeon referral, but this is less efficient than
identifying likely surgical candidates in the scheduling process. Most spine
centers have a non-evidence-based triage process that is designed using empiric
principles rather than actual data. In order to improve this process, Dr. Boden
and colleagues from Emory Spine Center in Atlanta reviewed over 8,000 patients
who had seen their spine surgeons for lumbar problems over an 11 year period.
They created a multivariate model to determine the strongest predictors of
undergoing lumbar surgery within a year of presentation to the surgeon. All of
the analyzed factors came from a patient questionnaire about demographic
characteristics, history, and current symptoms. No data from radiographic
reports, validated spine specific outcome questionnaires, or patient treatment
preferences were included. They found that the presence of leg symptoms was by
far the strongest predictor of undergoing surgery, with an odds ratio of 45
compared to patients with only low back pain. As such, the authors limited the
analysis to only the patients with leg symptoms. In this group, the strongest
predictors were the presence of leg pain (OR = 4.1), leg pain worse than back
pain (OR = 2.0), non-smoker (OR = 1.4), worsening leg pain (OR = 2.1), and age
over 65 (OR = 1.2). Six other factors were also significant predictors of
undergoing surgery. Based on these results, they created scoring systems based
on 11 and 5 questions, which stratified patients into low, medium, and high likelihood
of undergoing surgery. These groups had surgery rates of approximately 33%,
43%, and 55% (5-question rule) or 58% (11-question rule), respectively. This compares to a
baseline surgery rate of 40% after using the traditional triage process.

The authors have done nice work creating an evidence-based
approach to triaging patients in a multidisciplinary spine clinic. They
validated the model in their own population, suggesting that it is at least
internally valid. As they point out, the rule may not work in different patient
populations or with surgeons with different indications for surgery. The rule
has face validity given that leg pain is present in patients with radiculopathy
or neurogenic claudication, the two main clinical conditions for which lumbar
surgery is performed. Questions about smoking, employment, and BMI suggest that
this group of surgeons is less likely to operate on smokers, the obese, and
patients on worker’s comp or disability, groups that are all known to have
worse surgical outcomes. Other surgeons may operate on these patients more
frequently. Two other important factors that could be evaluated in the
scheduling process that were not included in this analysis are radiographic
findings and patient enthusiasm for surgery. While only clinically trained
providers can evaluate imaging definitively, certain buzzwords in MRI reports can
be used by administrative staff to increase the likelihood of a patient being a
surgical candidate (e.g., disk extrusion, severe stenosis,
spondylolisthesis). Additionally, patients who report they have no interest in spine surgery
are less likely to undergo surgery, and this could be used in the screening
process. Whether or not this triage rule is helpful to other spine centers
remains to be seen. However, it does provide some additional information to spine
centers planning a triage questionnaire and may be motivation for other groups to
perform a similar analysis of their practice. The Spine editors should be
congratulated for publishing this type of practical health services research,
which may directly impact how clinicians structure their practice.

Please read Dr. Boden’s article in the September 15 issue and
the accompanying Point of View by Dr. Pugely. Would this type of triage process
help your practice? Let us know by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS

The opioid crisis has been in the news for years, with no indication that it will be successfully controlled anytime in the near future. Physicians have been a major contributor to the problem by prescribing narcotics for chronic musculoskeletal and other non-cancer pain. Studies have shown that patients who are taking narcotics prior to spine surgery and those who remain on these medications for prolonged periods post-operatively have worse outcomes than patients not using narcotics. As such, it would be useful to determine risk factors for prolonged narcotic use following lumbar fusion so that modifiable factors could be addressed pre-operatively and patients could be counseled about their risk of long-term use. In order to better understand these risk factors, Dr. Kalakoti and colleagues from Iowa used Humana claims data to identify predictors of opioid use one year after lumbar fusion. They identified over 26,000 patients who underwent lumbar fusion (ALIF, P/TLIF, posterolateral fusion, or AP fusion) from 2007-2015 who had claims data available at least 3 months prior to surgery and for at least 12 months post-operatively. Patients who had received an opioid prescription within the 3 months leading up to surgery were classified as opioid users (OU, 58%); the remainder were classified as opioid naïve (ON). Opioid users were more likely to be under 50 (9.2% OU vs. 5.1% ON), male (42% OU vs. 40% ON), and live in the South (66% OU vs. 60% ON). At one year, 42% of OU and 9% of ON patients were still receiving prescriptions for narcotics. Multivariate regression demonstrated that pre-operative opioid use was the strongest predictor of opioid usage at 1 year following surgery (ORs between 4.6 and 7.8 for the different fusion techniques). Other well-known risk factors such as depression and fibromyalgia were also independent predictors of long-term opioid use, but pre-operative narcotic use was by far the strongest predictor.

The results of this study come as no surprise to spine surgeons and others who care for patients following lumbar fusion. The fact that the majority of patients had received a narcotic prescription before surgery was somewhat disconcerting given that back and radicular pain are not good indications for narcotic use. The use of a large claims database was a good method to gauge narcotic use across the United States over nearly a decade. The major limitations related to claims data are that details such as duration and dosage of narcotics, indications for surgery, patient-reported outcomes, smoking status, and worker's compensation status are lacking. Nonetheless, pre-operative narcotic use was such a strong risk factor for long-term use that inclusion of these other factors would be unlikely to change the primary conclusion. The main question raised by these findings is whether weaning off opioids pre-operatively, a difficult thing to do for patients in chronic pain, would decrease the risk of long-term use post-operatively. While the authors suggest this is true, this paper offers no data to indicate that is the case. It is possible that weaning opioids pre-operatively will not change patients' predisposition for long-term use post-operatively. Certain non-modifiable psychological and physiological factors may put some patients at risk for chronic pain and long-term narcotic use that would persist despite pre-operative weaning. Nonetheless, minimizing pre-operative opioid use makes good sense even without Level 1 evidence supporting it. The most important message from this and other papers on this topic is that narcotics are not appropriate for treating back pain, and providers caring for spine patients should generally not prescribe them other than for acute post-surgical pain.

Please read Dr. Kalakoti's article on this topic in the September 1 issue. Does this change how you view opioid prescribing for lumbar fusion patients? Let us know by leaving a comment on The Spine Blog.

Spine surgeons are familiar with the pitfalls associated with thoracolumbar fusion in osteoporotic patients, namely screw loosening, cage subsidence, fracture, and pseudarthrosis. Given the increasing rate of spinal fusion in elderly patients, surgeons are encountering osteoporotic bone more frequently. Bisphosphonates and teriparatide are the most commonly used medications to treat osteoporosis. However, animal models have raised questions about bisphosphonates potentially interfering with bone healing and spinal fusion. On the other hand, teriparatide has been consistently shown to improve spinal fusion in animal models. In order to better clarify the literature on this topic, Dr. Buerba and colleagues performed a meta-analysis looking at the effect of bisphosphonates and teriparatide on fusion rate, screw loosening, fracture, and patient reported outcomes. They identified 9 comparative studies, including 3 RCTs, 4 prospective cohort studies, and 2 retrospective cohort studies. Four compared bisphosphonates to controls and demonstrated trends towards increased fusion rate (OR = 2.2, p = 0.09) and lower screw loosening rate (OR = 0.45, p = 0.19) for the bisphosphonate group. Only one study compared teriparatide to a control group, and this showed higher fusion rates and lower screw loosening in the teriparatide group. In two studies comparing bisphosphonates to teriparatide, the teriparatide group had a significantly higher fusion rate (OR = 2.3, p < 0.0001) and a trend towards a lower rate of screw loosening (OR = 0.37, p = 0.09). Compared to controls, bisphosphonates were associated with a lower fracture rate at the fused or adjacent levels (OR = 0.18, p = 0.0007). Patient reported outcomes were generally not different between the groups.

The authors have done a nice job quantitatively summarizing a heterogeneous literature on this topic. This heterogeneity is also what makes interpreting the results difficult, as treatment duration, dose, type of bisphosphonate, and definition of osteoporosis varied across studies. Bisphosphonates work on a complex pathway in which osteoclast inhibition can affect both bone resorption and formation, and we do not have a clear understanding of the effect of specific drug type, duration of use, or dosage on spinal fusion. The effect of teriparatide seems more straightforward, though duration of treatment remains variable across studies. There was also heterogeneity in the surgeries included in the studies, ranging from one-level fusions for degenerative spondylolisthesis to long thoracolumbar fusions for deformity. Despite this being a meta-analysis, due to the different comparisons across studies (i.e. bisphosphonate or teriparatide vs. control, bisphosphonate vs. teriparatide), the actual number of patients in each comparison was relatively low and limited the study's power. Despite these limitations, the study does allow some big picture conclusions. The most important is likely that bisphosphonates do not seem to impair fusion, so it may be better to continue these medications in osteoporotic patients rather than stop them prior to fusion. Additionally, teriparatide seems to be favored over bisphosphonates if a new agent is going to be started peri-operatively. Based on the available literature, teriparatide appears to be indicated for osteoporotic patients undergoing spinal fusion. Future studies are needed to define when to start the medication pre-operatively, optimal dosage, and duration of treatment.

Please read Dr. Buerba's article on this topic in the September 1 issue. Does this change how you view the role of bisphosphonates and teriparatide in thoracolumbar spinal fusion? Let us know by leaving a comment on The Spine Blog.

Expensive surgical procedures including spinal fusions have become targets for cost reduction programs. It is well-established that patient comorbidities, more complex surgeries, and complications increase costs. Less well-studied is the effect of individual surgeons on costs. In order to better explore this, Dr. Sielatycki and colleagues from Vanderbilt University compared the costs and patient-reported outcomes associated with ACDF among 5 spine surgeons at their institution. They included 431 elective ACDF cases (1 to 3 levels) and determined the inpatient and 90-day costs based on billing data and average Medicare reimbursements for the inpatient, surgeon, and outpatient services. Patient-reported outcomes were recorded at baseline and 3 months and included the neck disability index, EQ-5D, and numeric rating scale for arm and neck pain. Baseline patient characteristics were similar for all 5 surgeons, and predicted costs adjusted for patient characteristics were comparable across surgeons as well. However, the actual costs were significantly different among surgeons, with inpatient costs ranging from $10,522 for the least expensive surgeon to $15,366 for the most expensive. Differences were not quite as pronounced for 90-day costs but still varied by 25%. Logistic regression controlling for patient and surgery factors demonstrated that complications were the strongest predictor of higher costs, followed by number of levels fused. Individual surgeon was also a strong predictor of cost, with one surgeon having 150% higher odds of having a patient in the highest cost quartile compared to the median-cost surgeon. There were no significant differences in patient-reported outcomes across surgeons.

The authors have done a nice job demonstrating significant variation in costs for routine ACDF among five different surgeons at the same institution. What the paper does not tell us are the reasons behind the variation. The methods for determining costs were somewhat indirect, based on average Medicare reimbursements at the DRG and CPT level. While this might approximate costs on a large scale across institutions, actual costs can vary substantially on a patient-by-patient basis. Given that one of the surgeons operated on only 22 patients, these cost estimates may not have been accurate. The authors also did not attempt to explain why inpatient costs varied so much. One would expect similar inpatient cost estimates based on DRG and surgeon fee, but the costs varied by 45%. The main drivers of inpatient cost differences among surgeons are probably implant and bone graft choices, and these were not evaluated. Length of stay, operative time, and post-operative care including imaging, office visits, and PT could also vary among surgeons and affect costs. Complications, reoperations, and readmissions are major cost drivers and can vary among surgeons, though these tended not to vary much across surgeons in this study. While it has significant limitations, this study shines a light on the role individual surgeons can play in driving costs. It comes as no surprise that increased costs were not associated with better outcomes. Hopefully this paper will encourage more detailed cost analyses that evaluate the role of specific surgeon decisions (such as implant selection) on costs.

Please read Dr. Sielatycki's paper in the August 15 issue. Does this change how you view the role of the individual surgeon in driving costs? Let us know by leaving a comment on The Spine Blog.

​Antifibrinolytics have gained popularity in cardiac surgery, total joint arthroplasty, and spinal deformity surgery for their ability to decrease blood loss without any apparent associated complications. Many studies have looked at these agents in spine surgery, and Dr. Lu and colleagues from Australia performed a meta-analysis in order to synthesize the data from these investigations. They included 11 RCTs that compared either tranexamic acid (TXA) or epsilon aminocaproic acid (ACA) to placebo in spine surgery. Nine RCTs evaluated TXA, 2 evaluated ACA, and 1 included both agents. The specific type of surgery was not specified, though the majority of studies appeared to include primarily posterior thoracolumbar deformity surgery. Overall, they found a mean reduction of intra-operative blood loss of 127 mL, with average blood loss around 1,300 mL. Total blood loss, including post-operative drain output, was reduced by 230 mL. The patients receiving antifibrinolytics had 42% lower odds of receiving a blood transfusion (21% vs. 30%), and, among those receiving a transfusion, the antifibrinolytic patients received about 0.5 units less in transfused blood. There were no significant differences in overall complication rates or in thromboembolic events between the two groups. The authors concluded that antifibrinolytics could reduce blood loss and transfusion rates in spine surgery.
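Figures like "42% lower odds" are worth distinguishing from the raw transfusion rates, since an odds ratio is computed on the odds scale rather than the risk scale. As a minimal sketch of the arithmetic, the crude odds ratio can be computed directly from the reported 21% vs. 30% transfusion rates; note that the pooled, study-weighted OR from a meta-analysis can differ from this crude value:

```python
def odds(p):
    """Convert an event proportion to odds."""
    return p / (1 - p)

def odds_ratio(p_treated, p_control):
    """Crude odds ratio for an event in treated vs. control groups."""
    return odds(p_treated) / odds(p_control)

# Crude OR from the reported transfusion rates (21% antifibrinolytic vs. 30% placebo);
# the meta-analytic pooled OR weights each study and need not equal this value.
or_crude = odds_ratio(0.21, 0.30)
print(round(or_crude, 2))  # 0.62, i.e. roughly 38% lower crude odds
```

The gap between the crude 38% and the reported 42% reduction is expected: pooling weights each trial by its size and precision rather than simply combining the overall event rates.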

This paper is a nice quantitative synthesis of the literature on this topic. The spine community generally agrees that antifibrinolytics reduce blood loss, though the indications for their use and optimal dosing parameters remain poorly defined. This analysis included studies with mean blood loss ranging from 50 mL to 3 L, indicating that the studies included a wide range of surgeries, from small cervical cases to major deformity corrections. Most spine surgeons now use TXA for large deformity cases, though it is not clear if it is indicated for a single-level laminectomy and fusion. It is rarely used for single-level decompressions or anterior cervical surgery, where blood loss is very low. Among the included studies, the loading doses varied three-fold, from 10 mg/kg to 30 mg/kg. It is unclear if increasing the dose in this range increases efficacy. The most remarkable aspect of antifibrinolytics is that they seem to have minimal or no side effects and do not seem to increase the rate of thromboembolic events. Additionally, they are relatively inexpensive. This meta-analysis does a nice job summarizing the literature on antifibrinolytics in spine surgery. Their use for major deformity surgery is widely accepted. The spine surgery community now needs to determine for which other cases they are indicated.

Please read Dr. Lu's article on this topic in the August 15 issue. Does this change how you view the use of antifibrinolytics in spine surgery? Let us know by leaving a comment on The Spine Blog.

Data regarding spine surgeon reimbursement for specific procedures are hard to come by. Very little has been published on this topic in the spine literature, yet reimbursement patterns likely have an effect on surgeon decision-making. In order to better understand this topic, Dr. Meyers and colleagues from an academic private practice neurosurgery group in Buffalo, NY reviewed their own reimbursement data from 2010-2016. They determined average reimbursement for the 20 most common CPT codes each year stratified by insurance type (Medicare, Medicaid, Private, and Workers' Compensation). They adjusted for changes in case mix and payor mix and reported all results in 2010 US dollars. Their payor mix was relatively stable over time (about 50% private, 30% Medicare, 15% Workers' Compensation, and under 10% Medicaid). Their main outcome was average reimbursement per CPT code, which they compared over the time frame under study. Overall, they found an average increase of 14% for the average CPT code from 2010-2016. This increase was driven by the changes in Medicaid reimbursement, which increased 150% over that period. All other insurance types had decreased reimbursement from 2010-2013, with subsequent increases from 2014-2016. Private insurance decreased reimbursement from 2010-2016 by 9%, while Medicare and Workers' Compensation increased reimbursement by 4% and 8%, respectively.
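Reporting all results in 2010 US dollars is a simple deflation step: each nominal payment is scaled by the ratio of the base-year price index to the payment-year index. A minimal sketch of the conversion (the CPI index values below are hypothetical placeholders, not figures from the paper):

```python
def to_2010_dollars(amount, cpi_year, cpi_2010):
    """Deflate a nominal dollar amount to 2010 dollars
    using consumer price index (CPI) values."""
    return amount * cpi_2010 / cpi_year

# Hypothetical example: a $1,140 nominal reimbursement in a later year,
# with placeholder CPI index values for that year and for 2010.
print(round(to_2010_dollars(1140.0, cpi_year=240.0, cpi_2010=218.0), 2))  # 1035.5
```

The same ratio applied consistently across all codes and years is what makes the 14% average increase an inflation-adjusted (real) figure rather than a nominal one.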

The authors should be congratulated for analyzing a topic that is rarely discussed in the spine literature. Surgeon reimbursement likely affects surgical decision making, and understanding reimbursement patterns is important in this era of healthcare reform. Overall, the data should be fairly reassuring to spine surgeons as they show that overall reimbursement is up, even after adjusting for inflation. The most striking finding is the 150% increase in Medicaid reimbursement. In fact, average Medicaid reimbursement surpassed Medicare reimbursement in 2015 and 2016. The increase in Medicaid reimbursement would have had an even more profound effect on the overall average had Medicaid beneficiaries not made up less than 10% of their patients. The limitations of this study need to be considered while interpreting the data. The most important limitation is that this paper represents the reimbursement for a single group of neurosurgeons practicing in Western New York. The high increase in Medicaid reimbursement may have been unique to New York state, and other states probably did not see that level of increase. Similar studies looking at reimbursement across regions would be interesting and shed some light on geographic variation. The amount of data presented in the paper was relatively minimal and consisted primarily of the reimbursement for the "average" code. Given that the average reimbursement per code varied 20-fold ($85 for local bone grafting to $1,644 for TLIF+posterolateral fusion), more granular data at the CPT code level would have been nice to see, though the data tables would have been overwhelming. Overall, the paper paints a relatively good picture for reimbursement for spine surgery in the 2010s. Hopefully more papers like this will be forthcoming and allow surgeons, administrators, and policy makers to get a better understanding of reimbursement issues over time.

Please read Dr. Meyers's paper on this topic in the August 1 issue. Does this change how you consider changes to spine surgeon reimbursement over time? Let us know by leaving a comment on The Spine Blog.

​Since the publication of RCTs reaching opposite conclusions about the benefit of fusion in addition to laminectomy for lumbar degenerative spondylolisthesis (DS), the spine community has grappled with how to apply these results to the decision about whether to fuse DS patients.1,2 In an effort to better understand this on the population level, Daniel Vail and colleagues from Stanford used the MarketScan commercial insurance database to evaluate the effect of adding arthrodesis to laminectomy in the lumbar spondylolisthesis population (due to coding constraints, both degenerative and isthmic spondylolisthesis patients were included). Their primary outcomes were cost, length of stay, complications, reoperation, readmission, and post-operative opioid use (as measured by milligram morphine equivalents) for two years following surgery. They identified over 73,000 spondylolisthesis patients undergoing laminectomy alone (8%) or laminectomy and fusion (92%) from 2007-2014. Over 93% of the fusion patients received instrumentation, and orthopaedic surgeons were somewhat more likely to perform a fusion than neurosurgeons (95% vs. 90%). At baseline, the laminectomy alone patients were older, included more males, had more comorbidities, and were more likely to be located in the West compared to the laminectomy and fusion patients. The authors used a statistical matching technique that led to the exclusion of about 5,000 patients but eliminated baseline differences between the two matched groups. The fusion patients stayed about 1 day longer in the hospital and were 15% more likely to have a complication, and the uninstrumented fusion population drove much of this increase in complication rate. The index hospitalization cost was about $24,000 more in the fusion group, though the post-hospitalization cost was about $14,000 more in the laminectomy alone group. This difference in post-discharge cost was likely due to the fusion group being 34% less likely to undergo a reoperation. 
The fusion patients were prescribed more opioids in the first 2 months following surgery, though after the first two months there was no difference in opioid prescriptions. Geography was the strongest predictor of post-operative opioid prescribing, with patients in Idaho receiving 8-times more opioids in the two years following surgery compared to patients in New Mexico.
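Milligram morphine equivalents (MME), the measure of opioid use in this study, aggregate different opioids onto a common scale by multiplying each dose by a drug-specific conversion factor. A minimal sketch using commonly cited oral conversion factors (the prescription history below is hypothetical, and factors should always be checked against current guidance):

```python
# Commonly cited oral MME conversion factors (per CDC guidance); illustrative only.
MME_FACTORS = {"morphine": 1.0, "hydrocodone": 1.0, "oxycodone": 1.5, "tramadol": 0.1}

def total_mme(prescriptions):
    """Sum milligram morphine equivalents over a fill history given as
    (drug, mg_per_dose, number_of_doses) tuples."""
    return sum(MME_FACTORS[drug] * mg * n for drug, mg, n in prescriptions)

# Hypothetical post-operative fill history for one patient:
# 60 doses of oxycodone 5 mg plus 30 doses of hydrocodone 10 mg.
print(total_mme([("oxycodone", 5, 60), ("hydrocodone", 10, 30)]))  # 750.0
```

Summing MME per patient over a fixed window is what allows the eight-fold geographic comparison (Idaho vs. New Mexico) to be made on a single scale despite differing drugs and doses.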

The authors have done a nice job using a large dataset to evaluate the effect of fusion in addition to laminectomy on outcomes measurable using administrative data. The limitations of administrative databases always need to be considered when interpreting results based on them. This database does not include Medicare patients, so the population likely skewed younger than the overall spondylolisthesis population. The authors were also not able to distinguish isthmic from degenerative spondylolisthesis patients, though surgery for degenerative spondylolisthesis is much more prevalent than for the isthmic variety. It is also difficult to know how many patients changed insurance status in the two years following surgery, which could have resulted in some patients being lost to follow-up. The lack of patient-reported outcomes is always a major problem with these studies, though a substantial literature containing these data is already available. Overall, the results of this study are in line with prior studies on the topic. It is well-accepted that adding a fusion to laminectomy increases cost, hospital stay, and short-term complications. The lower reoperation rate associated with fusion is consistent with the Ghogawala paper, though it would have been nice to know the absolute rather than just the relative rates. The geographic variation in the amount of opioid prescribed post-operatively is striking but not surprising. While this paper does not give us much novel information, it confirms findings from prior, smaller studies and supports widely held beliefs about this subject.

Please read the article on this topic in the August 1 issue. Does this change how you view the role of fusion for degenerative spondylolisthesis? Let us know by leaving a comment on The Spine Blog.

The search for a highly efficacious, safe, and low cost bone graft substitute continues. While iliac crest bone graft is considered the gold standard, it has been largely abandoned due to concerns about donor site morbidity. Surgeons also prefer to avoid the time-consuming harvest, which is poorly reimbursed. BMP-2 was widely adopted prior to 2011, then its use markedly decreased after safety concerns were published by Carragee et al.1 Hospitals have also pressured surgeons to avoid its use due to cost concerns. Local bone graft has been shown to be effective for one- and two-level instrumented fusions, though cases that do not yield sufficient local bone graft and longer fusions require other bone graft options.2-4 Bone graft substitutes have been developed to fill this void, one of which is silicate calcium phosphate (SiCaP, marketed by Baxter Healthcare as Actifuse). This compound serves as an osteoconductive scaffold but has no osteoinductive properties. To better assess the efficacy of SiCaP, Dr. Coughlan and colleagues from Australia and the Netherlands performed an RCT in which 103 patients undergoing one- or two-level fusion were randomized to SiCaP or BMP-2, with the graft material placed across the transverse processes. All patients also received pedicle screw instrumentation and PLIF, which included an interbody device and local bone graft in the disk space. The primary outcome was radiographic fusion at 12 months, based on evaluation of the intertransverse fusion on CT scans and motion on flexion-extension radiographs. Fusion status at 24 months, back and leg VAS, ODI, and SF-36 scores were also recorded. At 12 months, 78 patients had imaging available to assess fusion. The authors did not report the fusion rate for these 78 patients but instead included the patients without imaging and classified them as not fused. Using this approach, the fusion rate was 53% for the SiCaP patients and 56% for the BMP-2 patients. 
At 24 months, 92 of the patients had radiographic data available, and the same analysis yielded a fusion rate of 80% for both groups. A separate per protocol analysis excluded patients without radiographs and also eliminated 21 patients with protocol violations, 13 of which were related to not using Mastergraft granules in the BMP-2 patients. Sixty-two patients were included in this analysis at 12 months, with a fusion rate of 71% for SiCaP and 74% for BMP-2. In the per protocol analysis, the 24 month fusion rate increased to 79% for the SiCaP group and 85% for the BMP-2 group. Clinical outcomes and adverse events were similar, with some clinical outcomes favoring the SiCaP group at 6 months.

This is a complex study that is challenging to interpret. The major limitation is the peculiar study design in which the interbody fusion using local bone graft was not considered when evaluating fusion status, and fusion across the facet joints was also apparently ignored. Additionally, no local bone graft was used for the intertransverse fusion, though this would typically be performed when local bone graft was available. The comparison of SiCaP to BMP-2 is also questionable, as BMP-2 is generally not indicated for one and two level fusions when local bone graft is available. A more compelling study design would have been SiCaP + local bone graft vs. local bone graft in single level decompression and posterolateral instrumented fusion, as that is a common procedure for which surgeons need to decide about adding a bone graft extender to the local bone graft. The inclusion of an interbody fusion using an interbody device with local bone graft confounds the current analysis as it is not clear what effect an interbody fusion or nonunion would have on the intertransverse fusion. Patients with instrumented fusions frequently fuse across the facet joints, and this was apparently not considered in the current analysis. Additionally, the blinding of the radiologists interpreting the studies is questionable as the SiCaP is radiodense and remains present even in the absence of bone formation. Finally, the study was underpowered due to protocol violations, loss to follow-up, and a relatively low number of patients initially enrolled. Given these limitations, it is difficult to draw any conclusions from the current study. Hopefully higher quality studies evaluating the efficacy of bone graft extenders will be forthcoming. However, given the widespread use of these products despite a lack of evidence supporting their efficacy, companies that produce them may not be motivated to fund studies that could yield a negative result.

Please read Dr. Coughlan's article on this topic in the August 1 issue. Does this change your opinion about SiCaP as a bone graft substitute? Let us know by leaving a comment on The Spine Blog.

Hospital readmission is a costly problem that CMS is trying to address through financial penalties for institutions with high readmission rates, so hospital leaders are highly motivated to reduce these rates. Readmission following spine surgery is not uncommon, and high quality data are needed to determine risk factors and reasons for readmission that can be targeted for improvement. Most prior studies on this topic have used large administrative databases to address the question, and these frequently lack sufficient accuracy and granularity to allow for the identification of specific actionable improvement targets. In order to address these shortcomings in the literature, Dr. Hills and colleagues from Vanderbilt University Medical Center analyzed 6 years of their spine surgery registry to identify risk factors and reasons for readmissions within 90 days of discharge following elective spine surgery for degenerative conditions. They included over 2,700 spine surgery patients with at least 3 months of follow-up and identified readmissions at their institution and outside institutions through a patient self-reported survey. The overall readmission rate was 5.6%, with about half of readmissions for surgery-related complications, 40% for medical reasons, and 10% for pain control. Timing of readmission varied depending on the cause, with a mean of 12 days for CSF leak, 23 days for surgical site infection, 6 days for non-infectious wound complication, 38 days for surgical failure (e.g. hardware failure, inadequate decompression), 12 days for medical readmission, and 6 days for pain control. Of the surgery-related readmissions, surgical site infection was the most common cause (30%), with the remainder split about equally between CSF leak, non-infectious wound complication, and surgical failure. 
Multivariate analysis demonstrated that a history of MI, history of osteoporosis, higher pre-operative leg or arm pain scores, longer operative duration, and lumbar surgery (as opposed to cervical surgery) were all independent risk factors for readmission.

This paper is a nice addition to the literature as it uses a single-institution registry that allowed for chart review at a much more detailed level than is possible from an administrative database study. The authors were able to understand the specific reasons for readmission, which is not possible with billing codes. The overall findings were not surprising, namely that patients with a higher comorbidity burden undergoing big operations were at higher risk for readmission. The authors suggested that medical and pain readmissions are the easiest to address and suggested that PCP visits shortly after discharge and nursing phone calls to address pain management might decrease readmissions for these problems. Psychosocial challenges are also an important driver of readmission that the authors did not address in this paper. These problems can limit some patients' ability to cope with recovery after surgery and function independently. Mental illness, substance abuse, lack of social support, and poverty can all contribute to readmission for "medical" or "pain" reasons, and these are difficult to measure in a study or address peri-operatively. This type of paper can help us identify pre-operatively those patients who are at high risk for post-operative problems, whether they be medical, pain-related, or psychosocial. Once identified, we can direct resources to support these patients to help reduce their risk of readmission. In general, the cost of these targeted resources, whether an additional PCP visit, nurse phone calls, pain clinic consultation, or social work assistance, is typically far less than the cost of a readmission.

Please read Dr. Hills's article on this topic in the July 15 issue. Does this change how you think about reducing readmissions following elective spine surgery? Let us know by leaving a comment on The Spine Blog.

Sagittal facet orientation is a well-known risk factor for degenerative spondylolisthesis, but its role in other degenerative conditions has not been extensively investigated. A more sagittal orientation provides less resistance to anterior translation, and it makes sense that such an orientation would put the patient at risk for listhesis. In order to better evaluate the association between facet orientation and lumbar spinal stenosis (SpS) without listhesis, Dr. Liu and colleagues performed a case-control study in which 91 SpS patients were compared with 91 age- and sex-matched controls without a diagnosis of SpS. The SpS patients had MRIs performed as part of their diagnostic evaluation, and the control patients underwent MRI after being randomly selected for the study. The main variable under investigation was facet orientation, defined on an axial MRI image as the angle between a line drawn through the anterior and posterior corners of the superior facet and a midsagittal line running through the center of the disk space. Similar to prior studies, they found that the facet joints became more coronal in orientation from L2-L3 to L5-S1. The main finding of the study was that the facet joints at every level were significantly more sagittal in orientation in the SpS group than in the control group. Women also tended to have more coronally oriented facet joints than men.
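The facet orientation measurement described above reduces to plane geometry: the angle between the line through the two digitized facet corners and the midsagittal axis of the axial image. A minimal sketch of that calculation (the corner coordinates are hypothetical, and the image's y-axis is assumed to be the midsagittal direction):

```python
import math

def facet_angle(anterior, posterior):
    """Angle in degrees between the facet line (anterior -> posterior corner)
    and the midsagittal (y) axis of an axial image.
    0 degrees = purely sagittal orientation, 90 degrees = purely coronal."""
    dx = posterior[0] - anterior[0]
    dy = posterior[1] - anterior[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

# Hypothetical facet corner coordinates (mm) digitized from an axial slice.
print(round(facet_angle((10.0, 0.0), (14.0, 10.0)), 1))  # 21.8
```

On this convention, a smaller angle means a more sagittally oriented facet, which is the configuration the study associates with stenosis.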

The authors have done a nice job demonstrating that stenosis patients have more sagittally oriented facet joints than an age- and sex-matched control group. This paper raises the question of whether a more sagittal facet orientation increases the risk for developing stenosis or if degenerative changes result in a more sagittal facet orientation. Sagittally oriented facets bear less load, and these motion segments may bear more load through the disks. It is possible that this promotes a degenerative cascade leading to stenosis. An alternative explanation is that baseline facet orientation is not related to degenerative changes and that the degenerative changes result in the change in orientation. Given that an earlier study by the authors showed that facets tend to become more sagittal with age, it may be that facet orientation is a result of rather than a cause of degenerative changes leading to stenosis. The only way to evaluate this hypothesis would be to obtain lumbar MRIs in a young population and follow them prospectively to determine if baseline facet orientation predicted the development of degenerative changes and stenosis. It is unclear if such a study would be performed given that facet orientation would be a non-modifiable risk factor that could not be treated. The major limitation of this study is that it shows correlation without being able to determine causation. Another potential technical issue is the difficulty in defining facet orientation as the facet joint surface is typically a curvilinear rather than linear structure. However, given the consistent findings, it is unlikely that measurement error played a major role. Finally, there is no mention of blinding, and the investigators performing the measurements likely knew the hypothesis and were aware of the presence of stenosis. 
This paper does a nice job demonstrating that sagittal facet orientation is associated with stenosis, adding to the literature that has shown it to be associated with degenerative spondylolisthesis. We will have to wait for a prospective study to answer the question about cause and effect.

Please read Dr. Liu's paper on this in the July 15 issue. Does this change how you view risk factors for stenosis? Let us know by leaving a comment on The Spine Blog.

While multiple studies have suggested that prophylactic anticoagulant dosing is relatively safe following spine surgery, data looking at the results of therapeutic anticoagulation to address post-operative thromboembolic events (i.e. DVT, PE, MI) are scarce. In general, thromboembolic events following spine surgery are considered potentially fatal and need to be treated with therapeutic anticoagulation despite the risk of bleeding complications, including epidural hematoma and neurological decline. In order to better assess the effect of therapeutic anticoagulation following spine trauma surgery, Dr. Shiu and colleagues from the University of Maryland, home of a very busy Level 1 trauma center, retrospectively reviewed over 1,700 patients undergoing spine trauma surgery over a 14-year period. They found 62 patients with a diagnosis of post-operative DVT, PE, or MI who were treated with therapeutic anticoagulation (heparin drip, low molecular weight heparin [LMWH], or warfarin) and propensity-score matched them with 174 similar patients who did not have a thromboembolic event or receive therapeutic anticoagulation. Anticoagulation was started at a mean of 12 days following surgery (range 2-54 days), with 51% receiving heparin, 46% LMWH, and 3% warfarin. Eighteen percent of anticoagulation patients underwent reoperation at a median of 20 days following the index surgery compared to 10% of control patients at a median of 27 days. There were a wide range of indications for reoperation, with epidural hematoma with neurological decline occurring in 3% of anticoagulation patients compared to 1% of control patients. Subgroup analysis looking at type of anticoagulation demonstrated a much higher reoperation rate for the heparin group (31%) compared to the LMWH group (6.5%), with epidural hematoma (7% vs. 0%), wound complication (14% vs 3%), and non-spinal hemorrhage (10% vs. 0%) occurring more commonly in the heparin group.
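Propensity-score matching of the kind used here pairs each anticoagulated patient with control patients whose estimated probability of receiving anticoagulation is similar. A toy sketch of greedy 1:3 nearest-neighbor matching on precomputed scores (all patient IDs and score values are hypothetical; a real analysis would first estimate the scores from covariates, typically with logistic regression, and this greedy approach is only one of several matching strategies):

```python
def greedy_match(treated, controls, ratio=3, caliper=0.05):
    """Greedy nearest-neighbor matching on propensity scores.
    treated, controls: dicts mapping patient id -> propensity score.
    Each treated patient gets up to `ratio` controls within `caliper`.
    Returns {treated_id: [matched control ids]}."""
    available = dict(controls)  # controls still eligible for matching
    matches = {}
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        picked = []
        for _ in range(ratio):
            if not available:
                break
            # Closest remaining control by absolute score distance.
            cid = min(available, key=lambda c: abs(available[c] - score))
            if abs(available[cid] - score) > caliper:
                break  # no remaining control is close enough
            picked.append(cid)
            del available[cid]  # match without replacement
        matches[tid] = picked
    return matches

# Hypothetical scores: 2 treated patients, 6 potential controls.
treated = {"T1": 0.30, "T2": 0.61}
controls = {"C1": 0.28, "C2": 0.33, "C3": 0.29, "C4": 0.60, "C5": 0.63, "C6": 0.90}
print(greedy_match(treated, controls))
# {'T1': ['C3', 'C1', 'C2'], 'T2': ['C4', 'C5']}
```

Note that T2 receives only two matches because the remaining control (score 0.90) falls outside the caliper, which mirrors why the study's 62:174 ratio is close to, but not exactly, 1:3.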

The authors should be congratulated for putting together a cohort of 62 patients who underwent therapeutic anticoagulation for a thromboembolic event following spine trauma surgery. Given that it took 14 years to create a cohort of this size at one of the country's busiest spine trauma centers, this is likely the largest such cohort that could be created at a single institution looking at this issue. That being said, 62 patients is a relatively low number for statistical analysis, especially for subgroup analyses or evaluation of bleeding complications, which are relatively rare. In order to gain some statistical power, the authors looked at all-cause reoperation, some of which (i.e. repair of CSF leak, loss of reduction, hardware failure) are clearly not related to anticoagulation. Even in the heparin anticoagulation group, only two patients had epidural hematoma, which precludes any meaningful statistical analysis at this level. The paper certainly raises concern that therapeutic anticoagulation with a heparin drip is potentially risky, given the 31% reoperation rate in this group compared to 7% in the LMWH group and 10% in the control group. What is also striking is that anticoagulation began at a mean of 12 days after the index surgery, a time point at which one would not expect a high rate of bleeding at the surgical site regardless of anticoagulation method. The analysis did not look at the timing of anticoagulation to determine if earlier anticoagulation increased the risk of bleeding complications, but that seems likely to be the case. While the authors did control for certain factors in their propensity score matching, it is possible that other confounders were associated with the choice of anticoagulation agent and were driving some of the observed differences. Heparin may have been used preferentially earlier in the 14-year series, and other changes over time may have reduced the reoperation rate rather than a shift towards LMWH use. 
This is a nice first look at this topic, and it provides a cautionary tale about the use of therapeutic heparin following spine surgery. A future prospective study looking at a greater number of patients would be helpful, but that would likely require multiple sites and a long enrollment period.

Please read Dr. Shiu's article on this topic in the July 1 issue. Does this change your view of therapeutic anticoagulation for a thromboembolic event following spine surgery? Let us know by leaving a comment on The Spine Blog.

​New neurological deficit is one of the most feared
complications of spine surgery, and it occurs more frequently with more complex
surgery. Adult deformity surgery now regularly involves correction of large
curves with 3 column osteotomies, maneuvers that put nerve roots and the spinal
cord at risk for iatrogenic injury. The true rate of these events in major
deformity surgery has been difficult to quantify due to the relatively low rate
of neurological injury and the low number of these cases that are performed at
any given center. To address this, the Scoli-RISK-1 study analyzed neurological
outcomes for 265 patients treated at 15 deformity centers around the world. All
patients underwent major deformity correction that involved a Cobb angle over
80 degrees, 3 column corrective osteotomy, revision surgery requiring
osteotomy, and/or deformity correction for spinal cord compression. The authors
calculated a lower extremity motor score (LEMS) from 0-50 (0-5 points for each
ASIA myotome for each lower extremity) at baseline, discharge, 6 weeks, 6
months, and 24 months after surgery. Twenty-five percent of patients had some
degree of motor deficit at baseline, and 98% had intra-operative neuromonitoring.
Sixty-one patients (23%) had a neurological decline at discharge, with 1/3 of
these patients having a drop in LEMS of 5 or more points (major decline) and
the other 2/3 having a drop of less than 5 points (minor decline). Of the
patients with motor function decline, 68% had full recovery by 6 months and 71%
by two years. That translates to 6% of patients sustaining a permanent
neurological decline following major adult deformity surgery. While the rate of
full recovery was similar for minor and major declines (74% vs. 67%), those
with minor declines were more likely to experience no recovery (18% vs. 6%).
Neuromonitoring detected changes in 27% of those sustaining major declines and
in 13% of those experiencing minor declines.

This study likely offers the largest prospective analysis of
neurological decline following major adult deformity surgery. Nearly 25% of
patients had a decline in neurological status at the time of discharge
following major adult deformity surgery, but only 6% had permanent deficits. The
surgeries included were all high risk, major deformity operations, most of
which included three column osteotomies. The authors should be congratulated on
their honest reporting of their neurological complication rate, and these data
can be used to counsel patients considering this type of surgery. While a 23%
rate of new neurological deficit at discharge seems high, it is somewhat
reassuring that the permanent rate is only 6%. The challenging aspect of
interpreting this study is that it is difficult to determine the specific
etiologies of the neurological declines, and there is likely a broad range of events
leading to the injuries. Different types of surgery likely have different risk
profiles, and different injuries have different prognoses. Subgroup analyses
looking at a more granular level would suffer from a lack of power, so the
overall result is likely the best the authors can do. The low reported
sensitivity of neuromonitoring is somewhat concerning and is lower than that
reported in other studies. While neuromonitoring is considered standard of care
for this type of surgery, if it detects only a minority of the injuries, its
utility can be questioned. This paper adds another piece of data that can be
shared with adult deformity patients considering major corrective surgery. While
the long-term benefits of deformity surgery can be significant, many patients
may not be willing to take on the risks and recovery associated with it.

Please read Dr. Kato’s article on this topic in the July 1
issue. Does this change how you view the risk of neurological decline following
major adult deformity surgery? Let us know by leaving a comment on The Spine Blog.
Adam Pearson, MD, MS
Associate Web Editor

​The internet has become the main source of consumer information across all markets, and healthcare is no exception. There are multiple for-profit physician review websites where people can anonymously provide physician reviews, and these are popular sites for patients choosing a healthcare provider. The extent to which these websites are used to rate spine surgeons has not been formally evaluated, so Dr. Zhang and colleagues from the University of Rochester analyzed the on-line reviews of 209 active spine surgeon members of the Cervical Spine Research Society. They found that all but one had at least one rating on the five websites they evaluated, and the average surgeon was rated on three of the five sites. Overall, they found positive ratings, with the average surgeon receiving a normalized score of 80/100 (corresponding to 4/5 stars on most of the websites). They evaluated predictors of on-line ratings including geographic location, practice type (academic vs. private practice), gender, specialty (orthopaedics vs. neurosurgery), and years in practice. Academic surgeons had moderately but significantly higher ratings than those in private practice (82 vs. 78), and those with over 20 years of experience had lower ratings than those with 2-10 years of experience (77 vs. 84). The other factors were not significant predictors.

The role of on-line ratings in patient selection of a spine surgeon is poorly understood. Most physicians are aware of the ratings, though it seems as though only a minority of physicians actively manage their on-line presence. This study makes it clear that spine surgeons are evaluated with similar frequency to other physicians, though it does not shed light on how patients or referring doctors use these ratings. After reading this article, I read my own on-line reviews, which were present on 2 of the 5 websites. One site included 7 reviews, the other 3. Overall, I had about a 4 star rating, similar to the average ratings in this article. Unlike medical journals that require a sufficient sample size prior to publishing results, physician review websites will publish a rating based on a single review. While the number of reviews on which a rating is based is published, it appears in relatively small font compared to the rating itself. Given the low number of ratings generally available, physicians and patients should realize how easy it is to manipulate on-line ratings in either a positive or negative direction. The overall accuracy of on-line physician reviews is unknown, but ratings based on low numbers of reviews are at higher risk of inaccuracy or manipulation. This paper is a reminder to the spine surgery community that on-line ratings are here to stay. Physicians need to have a better understanding of how patients use these websites so they can devote appropriate resources to managing their on-line presence. Hopefully future research focused on patient perspectives of these sites will be forthcoming.

Please read Dr. Zhang's article on this topic in the June 15 issue. Does this change how you view the role of physician review websites? Let us know by leaving a comment on The Spine Blog.