Asymptomatic bacteriuria was defined as the isolation of cultivatable microorganisms without the presence of symptoms and signs suggestive of UTI. For convenience, and in accordance with previous reports [1], the term “bacteriuria” was used to describe the presence of bacteria or fungi throughout the manuscript, as funguria cases are commonly included in previously published analyses and could not be studied separately because no distinct rates were provided [1, 12–14]. Instead, as detailed below, we performed a subgroup analysis without the studies that included funguria cases.

We did not employ specific cutoffs for clinically significant growth or impose restrictions on the number of organisms isolated. This broad definition was employed to ensure that inappropriate treatment of ASB was adequately captured, regardless of whether the ASB case in question stemmed from colonization or sample contamination. Patients with bacteriuria and symptoms/signs suggestive of infection were only considered to have ASB when the signs/symptoms could be clearly attributed to a different cause. In line with the IDSA recommendations [8], antimicrobial treatment of ASB was considered appropriate for pregnant patients and patients undergoing urologic procedures with a high likelihood of bleeding. For our secondary analysis, pyuria was defined as the presence of at least 5 white blood cells (WBC)/high-powered field (HPF), whereas hematuria was defined as the presence of at least 10 red blood cells (RBC)/HPF.

Our primary outcome of interest was the rate of inappropriate treatment of ASB, calculated by dividing the number of ASB cases in which antimicrobial therapy was inappropriately prescribed by the total number of ASB cases in which no treatment was warranted, according to the IDSA guidelines [8]. As a secondary outcome, we sought to identify factors that were associated with ASB overtreatment in the included study cohorts.
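
For clarity, the primary outcome described above is simply a per-study proportion (the notation below is ours, not the original authors’):

\[
\text{Overtreatment rate} = \frac{\text{number of ASB cases treated inappropriately}}{\text{number of ASB cases in which no treatment was warranted}}
\]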

Two researchers (MF, AK) extracted data independently; discrepancies between them were resolved by consensus, and the data are summarized in Table 1 and Supplementary Table 2. The methodological quality of eligible studies was assessed with the Newcastle–Ottawa Quality Assessment Scale (NOS) (see Supplementary Table 1) [15]. A star was awarded to a study for each study parameter that met adequate quality standards, so studies judged to be of adequate quality in all 6 of the examined parameters received the maximum of 6 stars. An NOS rating of 5 stars or higher was considered adequate for the purposes of our analysis.
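
As a minimal sketch of this scoring rule (the parameter labels below are illustrative placeholders, not the actual NOS items assessed), the adequacy decision amounts to:

```python
# Sketch of the NOS adequacy rule described above: one star per study parameter
# judged adequate, maximum 6 stars, >= 5 stars considered adequate.
# Parameter names are hypothetical placeholders, not the real NOS items.

def nos_stars(judgements: dict[str, bool]) -> int:
    """Count one star for each parameter judged to be of adequate quality."""
    return sum(judgements.values())

def is_adequate(judgements: dict[str, bool], threshold: int = 5) -> bool:
    """A study is rated adequate if it receives at least `threshold` stars."""
    return nos_stars(judgements) >= threshold

example_study = {
    "selection_1": True,
    "selection_2": True,
    "comparability": False,
    "outcome_1": True,
    "outcome_2": True,
    "outcome_3": True,
}
print(nos_stars(example_study), is_adequate(example_study))  # 5 True
```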

Table 1.

Characteristics of Included Studies: Study Midyear and Country, Design, Number of ASB Cases That Were Treated Inappropriately, Number of ASB Cases That Did Not Require Treatment, Prevalence of Overtreatment, Study Setting, Cutoff Used for Screening, Age and Sex of Participants

Abbreviations: ASB, asymptomatic bacteriuria; ED, emergency department; ESBL, extended-spectrum beta-lactamases; ICU, intensive care unit; NR, not reported; UTI, urinary tract infection; VRE, vancomycin-resistant enterococcus. *The lower count was used as a cut-off for the purposes of the respective sub-analysis. Only the portion of the cohort that used the 100 000 cfu/mL cut-off was included in the respective sub-analysis; in all other cases, the full cohort was used. The majority of the cohort used this cut-off, so it was employed for the respective sub-analysis.


The meta-analysis was conducted using the random-effects model, proposed by DerSimonian and Laird, to estimate the pooled rates and 95% confidence intervals of the primary outcome [16]. The Freeman-Tukey arcsine transformation was employed to ensure the inclusion of studies that reported extreme rates (rates near 0 or 1) [17]. The tau-squared statistic was calculated to assess the heterogeneity between the studies, and possible sources of heterogeneity were further explored by meta-regression analysis (Knapp and Hartung model) [18]. Using this methodology, we performed a temporal sensitivity analysis that included only the studies performed in 2006 and after to assess the effect of the introduction of the IDSA guidelines on the reported rates. A subgroup analysis without the studies that included funguria cases was also conducted. We used Egger’s regression test (ET) as an indicator of small study effect [19]. A time-trend analysis was performed by transforming the model coefficients to rates and plotting them against the median year along with the reported prevalence rates [20].
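
For illustration, the pooling step described above can be sketched in a few lines. This is a minimal NumPy implementation of the Freeman-Tukey double-arcsine transformation combined with DerSimonian-Laird random-effects pooling, using hypothetical study counts and the simple sin²(t/2) back-transformation rather than the exact Miller formula, so it may differ in detail from the software used for the published estimates.

```python
import numpy as np

def freeman_tukey(events, n):
    """Freeman-Tukey double-arcsine transform of per-study proportions.
    Returns transformed values and their approximate variances, 1 / (n + 0.5)."""
    events, n = np.asarray(events, float), np.asarray(n, float)
    t = np.arcsin(np.sqrt(events / (n + 1))) + np.arcsin(np.sqrt((events + 1) / (n + 1)))
    v = 1.0 / (n + 0.5)
    return t, v

def dersimonian_laird(t, v):
    """DerSimonian-Laird random-effects pooling on the transformed scale."""
    w = 1.0 / v                                  # inverse-variance (fixed-effect) weights
    t_fixed = np.sum(w * t) / np.sum(w)
    q = np.sum(w * (t - t_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(t) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * t) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

def back_transform(t):
    """Approximate inverse of the double-arcsine transform (sin^2(t/2))."""
    return np.sin(t / 2.0) ** 2

# Hypothetical per-study counts: ASB cases treated inappropriately / cases not warranting treatment
treated = np.array([30, 12, 55, 8])
total = np.array([60, 40, 100, 25])

t, v = freeman_tukey(treated, total)
pooled, se, tau2 = dersimonian_laird(t, v)
rate = back_transform(pooled)
ci = back_transform(np.array([pooled - 1.96 * se, pooled + 1.96 * se]))
print(f"pooled overtreatment rate {rate:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), tau^2 = {tau2:.3f}")
```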


We’ve bet on well-supported open source projects like Google’s V8. Following an upgrade from Node.js v6 to v8, this bet has paid off. Our latencies are more consistent, and our global infrastructure server costs have gone down by almost 40%.

Whilst we are continuously optimizing our infrastructure running costs, there is always a trade-off between allocating engineering resource to focus on new revenue (features) vs reducing costs (optimizing performance).

In the majority of cases, for our high-growth tech business, revenue growth is where our focus lies. Fortunately, though, as we have bet on a number of underlying technologies that are incredibly well supported by the community, we continue to get material cost reductions without much engineering effort.

A case in point is our recent Node.js upgrade from v6 to v8. As a result, we have seen two significant improvements:

Under load, performance is less spiky and more predictable

In the graph below, you’ll see that in our clusters containing hundreds of nodes, during the busiest times we saw nodes spiking to nearly 100% CPU for brief periods, in spite of the mean CPU utilization sitting at around the 50% mark.

Yet once we completed the upgrade to Node.js v8, with comparable load on the cluster, we saw far more predictable performance without the spikes:

We can speculate about which changes in the underlying engine are responsible for this, but in reality the V8 JavaScript engine is improving on many fronts (specifically the compiler, runtime, and garbage collector), which all collectively play a part.

Bang for our buck has vastly improved: circa 40% real-world saving

When we performed load testing in our lab on Node.js v6 vs Node.js v8, we saw that, in the said region, there was a 10% increase in performance. This is not all that surprising. However, once we tested v8 in one of our isolated clusters servicing real-world production traffic, the benefits were far more significant. Whilst Google V8’s TurboFan and Ignition gave us the ability to increase the rate of operations on the same underlying hardware, the improvements mentioned above (which made the performance more predictably smooth on each node) gave us more confidence in the true spare capacity we had in each cluster. As such, we were able to run with fewer nodes under most conditions.

As you can see below, in one of our busier clusters running Node.js v8, we were able to reduce our raw server costs by circa 40–50%:

If performance matters, then bet on technology that has the engineering muscle and drive to continuously optimize, so you don’t have to.

Whilst the benefits we experienced from this upgrade could be considered a lucky win for us, we don’t see it that way. Building an Internet-scale system without Google-scale resources requires a strategic approach to your technology choices. If you focus on projects that have a community of engineers focused on improving performance, then you’re bound to have some luck along the way.

One bet we took when choosing Node.js was that, over time, it would continue to get faster, and it has, significantly so. That’s no surprise, of course, given the Google V8 engine is used in their Chrome browser. Since 2015, there’s been an average of around 20,000 lines of code changing each week in the V8 engine. That’s a mammoth amount of effort from a highly skilled engineering team.

We’ve made similar bets with other technologies we’ve chosen, which also have a large group of contributors focused not just on features, but also on continuous performance optimizations such as: