The model. What it’s doing is best described as “SimCity without the graphics”. It attempts to simulate households, schools, offices, people and their movements, etc. I won’t go further into the underlying assumptions, since that’s well explored elsewhere.

Non-deterministic outputs may take some explanation, as this is not something anyone had previously floated as a possibility.

The documentation says:

“The model is stochastic. Multiple runs with different seeds should be undertaken to see average behaviour.”

“Stochastic” is just a scientific-sounding word for “random”. That’s not a problem if the randomness is intentional pseudo-randomness, i.e. the randomness is derived from a starting “seed” which is iterated to produce the random numbers. Such randomness is often used in Monte Carlo techniques. It’s safe because the seed can be recorded and the same (pseudo-)random numbers produced from it in future. Any kid who’s played Minecraft is familiar with pseudo-randomness because Minecraft gives you the seeds it uses to generate the random worlds, so by sharing seeds you can share worlds.
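
To make this concrete, here is a minimal sketch in C++ (the language the model is written in) of why a recorded seed should make a stochastic simulation replayable. The function and the values are mine, purely for illustration:

```cpp
#include <cstdint>
#include <iostream>
#include <random>

// Toy "simulation": draw a fixed number of pseudo-random values from a
// seeded generator and reduce them to a single result.
double run_simulation(std::uint32_t seed) {
    std::mt19937 rng(seed);                              // PRNG fully determined by the seed
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double total = 0.0;
    for (int i = 0; i < 1000; ++i)
        total += u(rng);
    return total;
}

int main() {
    std::cout << run_simulation(42) << '\n';  // some value
    std::cout << run_simulation(42) << '\n';  // identical value: same seed, same run
    std::cout << run_simulation(43) << '\n';  // different, but equally reproducible, value
}
```

(One caveat: the standard library's distribution classes are not guaranteed to produce identical values across different compilers, which is one legitimate reason identical seeds can give different numbers on different machines; the raw mt19937 stream itself is fully specified. None of that excuses different results on the same machine with the same build.)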

Clearly, the documentation wants us to think that, given a starting seed, the model will always produce the same results.

Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.

I’ll illustrate with a few bugs. In issue #116 a UK “red team” at Edinburgh University reports that they tried to use a mode that stores data tables in a more efficient format for faster loading, and discovered – to their surprise – that the resulting predictions varied by around 80,000 deaths after 80 days.

That mode doesn’t change anything about the world being simulated, so this was obviously a bug.

The Imperial team’s response is that it doesn’t matter: they are “aware of some small non-determinisms”, but “this has historically been considered acceptable because of the general stochastic nature of the model”. Note the phrasing here: Imperial know their code has such bugs, but act as if it’s some inherent randomness of the universe, rather than a result of amateur coding. Apparently, in epidemiology, a difference of 80,000 deaths is “a small non-determinism”.

Imperial advised Edinburgh that the problem goes away if you run the model in single-threaded mode, like they do. This means they suggest using only a single CPU core rather than the many cores that any video game would successfully use. For a simulation of a country, using only a single CPU core is obviously a dire problem – as far from supercomputing as you can get. Nonetheless, that’s how Imperial use the code: they know it breaks when they try to run it faster. It’s clear from reading the code that in 2014 Imperial tried to make the code use multiple CPUs to speed it up, but never made it work reliably. This sort of programming is known to be difficult and usually requires senior, experienced engineers to get good results. Results that randomly change from run to run are a common consequence of thread-safety bugs. More colloquially, these are known as “Heisenbugs”.
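
For readers who haven’t met this class of bug, here is a deliberately broken sketch of my own (nothing to do with Imperial’s code) showing how a data race makes results change from run to run, even with identical inputs:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Two threads add into the same counter with no synchronisation.
    // The final value depends on how the scheduler interleaves them,
    // so it can (and usually does) differ from run to run.
    long counter = 0;

    auto work = [&counter] {
        for (int i = 0; i < 1'000'000; ++i)
            ++counter;                  // unsynchronised read-modify-write: a data race
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    // The correct answer is 2,000,000; the racy version typically prints
    // a different, smaller number each time it is run.
    std::cout << counter << '\n';
}
```

Build it with a threads-capable toolchain (e.g. g++ -pthread) and run it a few times. Making counter a std::atomic<long>, or guarding it with a mutex, restores determinism – which is exactly the kind of discipline multi-threaded code demands everywhere.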

But Edinburgh came back and reported that – even in single-threaded mode – they still see the problem. So Imperial’s understanding of the issue is wrong. Finally, Imperial admit there’s a bug by referencing a code change they’ve made that fixes it. The explanation given is “It looks like historically the second pair of seeds had been used at this point, to make the runs identical regardless of how the network was made, but that this had been changed when seed-resetting was implemented”. In other words, in the process of changing the model they made it non-replicable and never noticed.
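
The fix they describe amounts to keeping the random numbers used to build the synthetic population separate from the random numbers used to run the epidemic, so that how the network was produced (generated fresh or loaded from a cache) cannot perturb the epidemic’s stream. Here is a sketch of that general technique – invented names, not the model’s actual code:

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Invented stand-in for the model's synthetic population.
struct Network { std::vector<int> contacts; };

// Build the contact network from its own dedicated random stream.
Network build_network(unsigned setup_seed) {
    std::mt19937 setup_rng(setup_seed);
    Network n;
    for (int i = 0; i < 100; ++i)
        n.contacts.push_back(static_cast<int>(setup_rng() % 1000));
    return n;
}

// Run the epidemic from a second, freshly seeded stream, so its numbers
// depend only on run_seed, however much randomness the setup phase consumed.
double run_epidemic(const Network& n, unsigned run_seed) {
    std::mt19937 run_rng(run_seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double infections = 0.0;
    for (int c : n.contacts)
        if (u(run_rng) < 0.1)
            infections += c;
    return infections;
}

int main() {
    Network n = build_network(1);
    std::printf("%.1f\n", run_epidemic(n, 42));   // replayable given the pair (1, 42)
}
```

Because run_epidemic re-seeds its own generator, the epidemic’s numbers depend only on run_seed, no matter what happened during setup.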

Why didn’t they notice? Because their code is so deeply riddled with similar bugs, and they struggled so much to fix them, that they got into the habit of simply averaging the results of multiple runs to cover it up… and eventually this behaviour became normalised within the team.

In issue #30, someone reports that the model produces different outputs depending on what kind of computer it’s run on (regardless of the number of CPUs). Again, the explanation is that although this new problem “will just add to the issues”…“This isn’t a problem running the model in full as it is stochastic anyway”.

Although the academic on those threads isn’t Neil Ferguson, he is well aware that the code is filled with bugs that create random results. In change #107, which he authored, he comments: “It includes fixes to InitModel to ensure deterministic runs with holidays enabled”. In change #158 he describes the change only as “A lot of small changes, some critical to determinacy”.

Imperial are trying to have their cake and eat it. Reports of random results are dismissed with responses like “that’s not a problem, just run it a lot of times and take the average”, but at the same time, they’re fixing such bugs when they find them. They know their code can’t withstand scrutiny, so they hid it until professionals had a chance to fix it, but the damage from over a decade of amateur hobby programming is so extensive that even Microsoft were unable to make it run right.

No tests. In the discussion of the fix for the first bug, Imperial state that the code used to be deterministic at that point, but that they broke it without noticing while making other changes.

Regressions like that are common when working on a complex piece of software, which is why industrial software-engineering teams write automated regression tests. These are small programs that run the software under test with varying inputs and then check that the outputs match what’s expected. Every proposed change is run against every test, and if any test fails, the change may not be made.
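
At its simplest, a regression test for a deterministic model is just this (a sketch with made-up file names and command-line flags, not Imperial’s harness): run the program with a fixed seed and fixed inputs, and compare the output byte-for-byte against a stored “golden” result.

```cpp
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Read a whole file into a string (empty if the file is missing).
static std::string slurp(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

int main() {
    // Run the model with a fixed seed and fixed inputs; a real harness
    // would loop over many scenarios like this.
    int rc = std::system("./model --seed 42 --params baseline.txt --output out.csv");
    if (rc != 0) { std::cerr << "model failed to run\n"; return 1; }

    // A deterministic model must reproduce the stored golden output exactly.
    if (slurp("out.csv") != slurp("expected/baseline_seed42.csv")) {
        std::cerr << "REGRESSION: out.csv differs from expected/baseline_seed42.csv\n";
        return 1;
    }
    std::cout << "ok\n";
}
```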

The Imperial code doesn’t seem to have working regression tests. They tried, but the extent of the random behaviour in their code left them defeated. On 4th April they said: “However, we haven’t had the time to work out a scalable and maintainable way of running the regression test in a way that allows a small amount of variation, but doesn’t let the figures drift over time.”
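
What they are describing is the harder variant: allowing a small, bounded stochastic wobble without letting the figures drift. One common approach (again a sketch, with invented numbers) is to average several seeded runs and compare a summary statistic against a recorded reference with an explicit tolerance – the difficulty being that the tolerance itself then has to be chosen, justified, and kept under review:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Pass if 'actual' is within a relative tolerance of 'expected'. The tolerance
// must be chosen deliberately: too wide and real regressions, or slow drift,
// slip through unnoticed.
bool within_tolerance(double actual, double expected, double rel_tol) {
    return std::fabs(actual - expected) <= rel_tol * std::fabs(expected);
}

int main() {
    // Invented figures: predicted deaths from several seeded runs, checked
    // against the value recorded when the test was last reviewed.
    const double expected_mean = 24500.0;
    const std::vector<double> runs = {24310.0, 24620.0, 24480.0, 24550.0};

    double mean = 0.0;
    for (double r : runs) mean += r;
    mean /= runs.size();

    if (!within_tolerance(mean, expected_mean, 0.02)) {   // allow a 2% wobble
        std::printf("REGRESSION: mean %.0f is outside 2%% of %.0f\n", mean, expected_mean);
        return 1;
    }
    std::printf("ok (mean %.0f)\n", mean);
}
```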

Beyond the apparently unsalvageable nature of this specific codebase, testing model predictions faces a fundamental problem, in that the authors don’t know what the “correct” answer is until long after the fact, and by then the code has changed again anyway, thus changing the set of bugs in it. So it’s unclear what regression tests really mean for models like this – even if they had some that worked.

Undocumented equations. Much of the code consists of formulas for which no purpose is given. John Carmack (a legendary video-game programmer) surmised that some of the code might have been automatically translated from FORTRAN some years ago.

For example, on line 510 of SetupModel.cpp there is a loop over all the “places” the simulation knows about. This code appears to be trying to calculate R0 for “places”. Hotels are excluded during this pass, without explanation.
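
To give a feel for the shape of such code, here is a purely illustrative sketch – none of the names, constants or structure below are Imperial’s:

```cpp
#include <cstdio>

// Illustrative sketch only: invented names and constants.
enum PlaceType { HOUSEHOLD, SCHOOL, WORKPLACE, HOTEL, NUM_PLACE_TYPES };

double place_r0_contribution(const double contact_rate[NUM_PLACE_TYPES],
                             const int    members[NUM_PLACE_TYPES]) {
    double r0 = 0.0;
    for (int pt = 0; pt < NUM_PLACE_TYPES; ++pt) {
        if (pt == HOTEL)
            continue;                                // hotels skipped, no reason given
        r0 += contact_rate[pt] * members[pt];        // each place type's contribution
    }
    return r0;
}

int main() {
    const double contact_rate[NUM_PLACE_TYPES] = {0.10, 0.08, 0.05, 0.07};
    const int    members[NUM_PLACE_TYPES]      = {3, 500, 40, 120};
    std::printf("R0 contribution from places: %.2f\n",
                place_r0_contribution(contact_rate, members));
}
```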

This bit of code highlights an issue Caswell Bligh has discussed in your site’s comments: R0 isn’t a real characteristic of the virus. R0 is both an input to and an output of these models, and is routinely adjusted for different environments and situations. Models that consume their own outputs as inputs are a problem well known to the private sector – it can lead to rapid divergence and incorrect predictions. There’s a discussion of this problem in section 2.2 of the Google paper, “Machine learning: the high interest credit card of technical debt”.
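
A toy illustration of why this is dangerous: if each calibration pass re-estimates a parameter from the previous pass’s output, and that estimate carries even a small systematic bias, the error compounds rather than averaging out. The numbers below are invented:

```cpp
#include <cstdio>

int main() {
    // Suppose each calibration pass re-estimates R0 from the model's own
    // previous output, and that estimate carries a small systematic bias
    // (+3% per pass here). The error compounds instead of averaging out.
    double r0 = 2.4;                                   // initial, externally supplied estimate
    for (int pass = 1; pass <= 10; ++pass) {
        r0 *= 1.03;                                    // model output fed back in as the next input
        std::printf("pass %2d: R0 = %.3f\n", pass, r0);
    }
    // After 10 passes R0 has drifted from 2.4 to about 3.2 without any new data.
}
```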

Adding new features to a codebase with this many quality problems will just compound them. If I saw this in a company I was consulting for, I’d immediately advise them to halt new feature development until thorough regression testing was in place and code quality had been improved.

Conclusions. All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.

On a personal level, I’d go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don’t have these people, and the results speak for themselves.”

Devastating. Heads must roll for this, and fundamental changes be made to the way government relates to academics and the standards expected of researchers. Imperial College should be ashamed of themselves.

Lms2

The UK government should be just as ashamed for taking their advice. And anyone in the media who repeated their nonsense.

Robert

The problem is the nature of government and politics. Politics is a systematic way of transferring the consequences of inadequate or even reckless decision-making to others without the consent or often even the knowledge of those others. Politics and science are inherently antithetical. Science is about discovering the truth, no matter how inconvenient or unwelcome it may be to particular interested parties. Politics is about accomplishing the goal of interested parties and hiding any truth that would tend to impede that goal. The problem is not that “government has been doing it wrong”; the problem is that government has been doing it.

Mimi

Thank you so much for this! This code should’ve been available from the outset.

Sean Flanagan

Amateur Hour all round! The code should have been made available to all other Profs & top Coders & Data Scientists & Bio-Statisticians to PEER Review BEFORE the UK and USA Gvts made their decisions. Imperial should be sued for such amateur work.

Caswell Bligh

This is an outstanding investigation. Many thanks for doing it – and to Toby for providing a place to publish it.

lesg

So this is ‘the science’ that the Government thinks it is following!

ChrisH29

This isn’t a piece of poor software for a computer game; it is, apparently, the useless software that has shut down the entire western economy. Not only will it have wasted staggeringly vast sums of money, but every day we are hearing of the lives that will be lost as a result. We are today learning of 1.4 million avoidable deaths from TB, but that is nothing compared to the UN’s own forecast of “famine on a biblical scale”. Does one think that the odious, inept, morally bankrupt hypocrite Ferguson will feel any shame, sorrow or remorse if, heaven forbid, the news in a couple of months’ time is dominated by the deaths of hundreds of thousands of children from starvation in the 3rd World, or will his hubris protect him?

speedy

I don’t understand why governments are still going for this ridiculous policy and NGOs all pretend it is Covid 19 that will cause this devastation RATHER than our reaction to it.

………………………………………….

EppingBlogger

Imperial and the Professor should start to worry about claims for losses incurred as a result of decisions taken based on such a poor effort. Could we know, please, what this has cost, over how many years, and how much of the Professor’s career has been achieved on the back of it?

Andy

Remember that Ferguson has a track record of failure:

In 2002 he predicted 50,000 people would die of BSE. Actual number: 178 (national CJD research and surveillance team).
In 2005 he predicted 200 million people would die of avian flu H5N1. Actual number, according to the WHO: 78.
In 2009 he predicted that swine flu H1N1 would kill 65,000 people. Actual number: 457.
In 2020 he predicted 500,000 Britons would die from Covid-19.

Still employed by the government. Maybe 5th time lucky?

…………………………………….

Juan Luna

Ferguson should be retired and his team disbanded. As a former software professional I am horrified at the state of the code explained here. But then, the University of East Anglia code for modelling climate change was just as bad. Academics and programming don’t go together.

At the very least the Government should have commissioned a Red team vs Blue team debate between Ferguson and Oxford plus other interested parties, with full disclosure of source code and inputs.

I support the idea of letting the Insurance industry do the modelling. They are the experts in this field.

………………………………………………………

Simon Conway-Smith

Why any of this isn’t obvious to our politicians says a lot about our politicians, but your summary also shows that it is ENGINEERS, and not academics, who should be generating the input to policy making. It is only engineers who have the discipline to make things work, properly and reliably.

Chris Martin

This kind of thing frequently happens with academic research. I’m a statistician and I hate working with academics for exactly this sort of reason.

skeptik

The global warming models are secret too (mostly), and probably the same kind of mess as this code.

………………………………………………………………

ANNRQ

Perhaps, if enough people come to understand how badly this has been managed, they will start to ask the same questions of the climate scientists and demand to see their models published.

It could be the start of some clearer reasoning on the whole subject, before we spend the trillions that are being demanded to avert or mitigate events that may never happen.

Debster 1

These so-called climate scientists were asked to provide the data, but they came back and said they had lost the data when they moved offices.

Andy

Michael Mann pointedly refused to share his modelling code for climate change when he was sued for libel in a Canadian court. He ended up losing, and that will cost him millions. Now why would an academic rather lose millions of dollars than show their working?

Let’s hope this “workings not required” attitude doesn’t get picked up by schoolkids taking their exams 🙂

At the end of the article, there is “analysis” from a BBC health correspondent.

With such pitiful performance from the national broadcaster, I think Ferguson and his team will face no consequences.

el muchacho

LOL, what a load of crap – it’s the other way around: it’s Mann who sued.

“In 2011 the Frontier Centre for Public Policy think tank interviewed Tim Ball and published his allegations about Mann and the CRU email controversy. Mann promptly sued for defamation[61] against Ball, the Frontier Centre and its interviewer.[62] In June 2019 the Frontier Centre apologized for publishing, on its website and in letters, “untrue and disparaging accusations which impugned the character of Dr. Mann”. It said that Mann had “graciously accepted our apology and retraction”.[63] This did not settle Mann’s claims against Ball, who remained a defendant.[64] On March 21, 2019, Ball applied to the court to dismiss the action for delay; this request was granted at a hearing on August 22, 2019, and court costs were awarded to Ball. The actual defamation claims were not judged, but instead the case was dismissed due to delay, for which Mann and his legal team were held responsible”

Another

Yes, Mann brought the case; on the other hand, it’s also correct that the case was dismissed when he didn’t produce his code, 9 years after the case started. The step that caused the eventual dismissal of the case was that Mann applied for an adjournment, and the defendants agreed on the condition that he supplied his code. Mann didn’t do that by the deadline specified, and the case was then dismissed for delay. Mann did say he would appeal.

el muchacho

No. Quite the opposite. This has bitten the climatologists in the butt with the so-called “climategate”. Congressional enquiries showed that their integrity was intact and that their methods were sound and followed standard scientific practice. But they lacked transparency, and therefore it was recommended that they should from now on make public all their numerical code and all their data. This has become widespread practice in climatology. In fact there is a guide of practice for climatologists: https://library.wmo.int/doc_num.php?explnum_id=5541

Simon Conway-Smith

It raises the questions: (a) what other academic models that have driven public policy are of such bad quality, and (b) do the climate models suffer in the same way, also making them untrustworthy?

Similar skeptical attention should be paid to the credibility automatically granted to economic model projections – even for decades ahead. Economic estimates are routinely treated as facts by the biggest U.S. newspapers and TV networks, particularly if the estimates are (1) from the Federal Reserve or Congressional Budget Office, and (2) useful as a lobbying tool to some politically influential interest group.

whatever

Academics are paid peanuts in the UK. It’s not the US with their six-figure salaries. You need to teach 8+ hours, do your administrivia, and perhaps you’ll squeeze a couple of hours in for research at the end (or beginning) of a very long day. Nothing like Google, with its 500K salaries and its code reviews. Sure, non-determinism sucks, but if the orders of magnitude of the results fit expectations from other models, it’s good enough to compete with other papers in the field. Want to change that? Fund intelligent people in academia the way you fund lawyers and bankers. Oh, and managers in private industry will change results if it suits them, so “privatise it” is bollocks.

Jeremy Crawford

Just wonderful, and sadly utterly devastating. As an IT bod myself and an early-days skeptic, this was such a pleasure to read. Well done.

Mike Haseler

Thanks for doing the analysis. Totally agree that leaving this kind of job to amateur academics is completely nonsensical. I like your suggestion of using the insurance industry, and if I were PM I would take that up immediately.

…………………………………………………………….

……….

Ben Grove

I’m afraid Ferguson is a very small part of the plan, and merely doing what he was hired for….

…………………………………………………………

Robert Borland

Academic science has not fallen victim to capitalism, it has fallen victim to bureaucracy and conformity; if you do not conform by espousing expected and required outcomes, you are labeled a pariah, demonised and excluded. Evidence contradicting official policy is suppressed, falsified, or rationalised away….

In the example of these pandemic modelling disasters, the paradigm shift would be to exclude modelling as an influence on government policy, and the manias that can result….

In this most recent marriage of political power and ‘modelling’ catastrophe, the solution has been to just come up with yet another model and to rationalise whatever policy was implemented as having been necessary; politicians will rarely if ever admit the error of a policy course, no matter what the cost, whether in lives or money.

……………………………………………………..

Andy Riley

Look at SetupModel.cpp from line 2060 – pages of nested conditionals and loops with nary a comment. Nightmare!

Alicat2441

Haven’t time to read the article and stopped at the portion where the data can’t be replicated. That right there is a huuuuuuge red flag and makes the “models” useless. I’ll come back tonight to finish reading. I have to ask: is this the same with the University of Washington IMHE models? Why do I have a sneaking suspicion that it is?

Laurence_R

The IMHE [Bill Gates] ‘model’ is much worse – it’s just a simple exercise in curve fitting, with little or no actual modelling happening at all. I have collected screenshots of its predictions (for the US, UK, Italy, Spain, Sweden) every few days over the last few weeks, so I could track them against reality, and it is completely useless. But, according to what I’ve read, the US government trusts it!

Until a few days ago, its curves didn’t even look plausible – for countries on a downward trend (e.g. Italy and Spain), they showed the numbers falling off a cliff and going down to almost zero within days, and for countries still on an upward trend (e.g. the UK and Sweden) they were very pessimistic. However, the figures for the US were strangely optimistic – maybe that’s why the White House liked them.

They seem to have changed their model in the last few days – the curves look more plausible now. However, plausible looking curves mean nothing – any one of us could take the existing data (up to today) and ‘extrapolate’ a curve into the future. So plausibility means nothing – it’s just making stuff up based on pseudo-science. In the UK, we’re not supposed to dissent, because that implies that we don’t want to ‘save lives’ or ‘protect the NHS’, so the pessimistic model wins. In the US, it’s different, depending on people’s politics, so I’m not going to try to analyse that.

So why do governments leap at these pseudo-models with their useless (but plausible-looking) predictions?... If there are competing crystal balls from different academics, the government will simply pick the one that matches its philosophy best, and claim that it is ‘following the science’.

……………………………………………………

………..

Simon Conway-Smith

They leap at them for fear of the MSM accusing them of not doing anything.

I had hoped Donald Trump would be a stronger leader than that, and would have insisted on any model being independently and repeatedly verified before making any decision.

The other factor that seems entirely missing from the models is the ability of existing medicines, even off-label ones, to treat the virus, and there have been many trials of Hydroxychloroquine with Zinc sulphate (& some also with Azithromycin) that have demonstrated great success. It constantly dismays me that this is ignored, and here in the UK patients are just given paracetamol, as if they have a headache!!

…………………………………………………………

Robin66

This is scary stuff. I’ve been a professional developer and researcher in the finance sector for 12 years. My background is Physics PhD. I have seen this sort of single file code structure a lot and it is a minefield for bugs. This can be mitigated to some extent by regression tests but it’s only as good as the number of test scenarios that have been written. Randomness cannot just be dismissed like this. It is difficult to nail down non-determinism but it can be done and requires the developer to adopt some standard practices to lock down the computation path. It sounds like the team have lost control of their codebase and have their heads in the sand. I wouldn’t invest money in a fund that was so shoddily run. The fact that the future of the country depends on such code is a scandal.

dr_t

Ferguson’s code is 30 years old. This review criticizes it as though it was written today, but many of these criticisms are simply not valid when applied to code that’s 30 years old. It was normal to write code that way 30 years ago. Monolithic code was much more common, especially for programs that were not meant to produce reusable components….

It’s perfectly normal not to want to disclose 30-year-old code because, as has been proven by this very review, people will look at it and criticize it as if it was modern code.

So Ferguson evidently rewrote his program to be more consistent with modern coding standards before releasing it. And probably introduced a couple of bugs in the process. Given the fact that the original code was undocumented, old, and that he was under time pressure to produce it in a hurry, it would have been strange if this didn’t introduce some bugs. This does not, per se, invalidate the model….

Stochastic models and Monte Carlo simulation are absolutely standard techniques. They are used by financial institutions, they were used 30 years ago for multi-dimensional numerical integration, they are used everywhere….

MFP

I read the author’s discussion of the single-thread/multi-thread issue not so much as a criticism but as a rebuttal to possible counter-arguments. I agree it probably should have been left out (or relegated to a footnote), but the rest of the author’s arguments stand independently of the multi-thread issues.

I disagree with your framing of the author’s other criticisms as amounting to criticism of stochastic models. It does not appear the author has an issue with stochastic models, but rather with models where it is impossible to determine whether the variation in outputs is a product of intended pseudo-randomness or whether the variation is a product of unintended variability in the underlying process.

Paul Penrose

dr_t, I am also a Software Engineer, with over 35 years of experience, so I understand what you are saying as far as 30-year-old code goes; however, if the software is not fit for purpose because it is riddled with bugs, then it should not be used for making policy decisions. And frankly, I don’t care how old the code is: if it is poorly written and documented, then it should be thrown out and rewritten, otherwise it is useless.

As a side note, I currently work on a code base that is pure C and close to 30 years old. It is properly composed of manageable-sized units and reasonably organized. It also has up-to-date function specifications and decent regression tests. When this was written, these were probably cutting-edge ideas, but they clearly weren’t unknown. Since then we’ve upgraded to using current tech compilers, source code repositories, and critical peer review of all changes.

So there really is no excuse for using software models that are so deficient. The problem is these academics are ignorant of professional standards in software development and frankly don’t care. I’ve worked with a few over the course of my career and that has been my experience every time.

skeptik

I agree 100% – I wrote C/C++ code for years, and this single-file atrocity reminds me of student code.

Neil

The fact it wasn’t refactored in 30 years is a sin, plain and simple.

……………………………………………..

dodgy geezer

I was coding on a large multi-language and multi-machine project 40 years ago. This was before Jackson Structured Programming, but we were still required to document, to modularise, and to perform regression testing as well as testing for new functionality. These were not new ideas when this model was originally created.

The point of key importance is that code must be useful to the user. This is normally ensured by managers providing feedback from the business and specifying user requirements in better detail as the product develops. And this stage was, of course, missing here.

Instead we had the politicians deferring to the ‘scientists’, who were trying out a predictive model untested against real life. That seems to have worked out about as well as if you had sacked the sales team of a company and let the IT manager run sales simulations on his own according to a theory which had been developed by his mates…

Robbo

Testing is already indicating that huge numbers of the global population have already caught it. The virus has been in Europe since December at the latest, and as more information comes to light, that date will likely be moved significantly backwards. If the R0 is to be believed, the natural peak would have been hit, with or without lockdown, in March or April. That is what we have seen. This virus will be proven to be less deadly than a bad strain of influenza, with or without a vaccinated population. Total deaths have only peaked post lockdown. That is not a coincidence.

Bumble

This model assumes first infections at least two months too late. The unsuppressed peak was supposed to be mid May (the ‘terrifying’ graph) so what we have seen in April is likely the real peak and lockdown has had no impact on the virus. Lockdown will have killed far more people.…

SteveB

Peak deaths in NHS hospitals in England were 874 on 08/04. A week earlier, on 01/04, there were 607 deaths. Crude Rt = 874/607 = 1.4. On average, a patient dying on 08/04 would have been infected c. 17 days earlier, on 22/03. So, by 22/03 (before the full lockdown), Rt was (only) approx 1.4. Ok, so that doesn’t tell us too much, but if we repeat the calculation and go back a further week to 15/03, Rt was approx 2.3. Another week back to 08/03 and it was approximately 4.0.

Propagating forward a week from 22/03, Rt then fell to 0.8 on 29/03.

So you can see that Rt fell from 4.0 to 1.4 over the two weeks preceding the full lockdown and then from 1.4 to 0.8 over the following week, pretty much following the same trend regardless.

So, using the data we can see that we could have predicted the peak before the lockdown occurred,simply using the trend of Rt.
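
For anyone wanting to reproduce the crude calculation, it is nothing more than this ratio (a rough sketch, not a proper epidemiological estimator):

```cpp
#include <cstdio>

// The crude ratio used above: deaths on a given day divided by deaths exactly
// one week earlier, attributed to infections roughly 17 days before the later date.
double crude_rt(double deaths_now, double deaths_week_before) {
    return deaths_now / deaths_week_before;
}

int main() {
    // The two figures quoted above: 874 deaths on 08/04 and 607 on 01/04.
    std::printf("crude Rt ~ %.1f (infections around 22/03)\n", crude_rt(874, 607));
}
```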

In my hypothesis, this was a consequence of limited social distancing (but not full lockdown) and the virus beginning to burn itself out naturally, with very large numbers of asymptomatic infections and a degree of prior immunity.

………………………………………………………….

silent one

What are the deaths of those that have died FROM covid 19, and how are those written on the death certificates? And how is it that those that die of a disease other than covid 19 are also included as covid 19 deaths, when they were only infected by covid 19? As we know there are asymptomatic carriers, so there MUST be deaths where they had covid but it was not a factor in those deaths, yet it was included on the death certificate. The numbers of deaths attributed to covid 19 have been over-inflated. Never mind that the test is for a general coronavirus and not specific to covid 19.

Right, but you’re comparing apples to oranges. Compare Covid-19 to other pandemics, like 1917, 1957, or 1968.

Chebyshev

Maybe it is not “despite” but “because of”? If you start the lockdown as late as March, then you ensure that infection and death rates are going to be higher, because of the high dosage and the fragile immune systems that come from lockdown.

There are plenty of countries without lockdown to compare against. So it is not an unverifiable hypothesis.

Epictetus

Yes, but the manner in which they count COVID-19 deaths is flawed. Even with co-morbidities they ascribe the death to COVID, and in cases where they do not test but there were COVID-like symptoms, they ascribe it to COVID, according to the CDC.

Bazza McKenzie

Most governments are busily fudging the numbers up, to ex-post “justify” the extreme and massively damaging actions they imposed on communities, and to gain financial benefit (e.g. states and hospitals which get larger payouts for Wuhan virus treatment than for treatment for other diseases).

As with “global warming”, the politicians, bureaucrats and academics are circling the wagons together to protect their interlinked interests.

Epidemic curves are flat or down in so many countries with such different mitigation policies that it’s hard to say this policy or that made a big difference, aside from two – ban all international travel by ship or airplane, and stop mass transit commuting. No U.S. state could or did do either, but island states like New Zealand could and did both. In the U.S., state policies differ from doing everything (except banning travel and transit) to doing almost nothing (9 low-density Republican states, like Utah and the Dakotas). But again, Rt is at or below 1 in almost all U.S. states, meaning the curve is flat or down. Policymakers hope to take credit for something that happened regardless of their harsh or gentle “mitigation” efforts, but it looks like something else – such as more sunshine and humidity, or the virus just weakening for unknown reasons (as SARS-1 did in the U.S. by May). https://rt.live/

…………………………………………………

LorenzoValla

As an academic, I would expect you to be appalled that the program wasn’t peer reviewed….

All of the modern standards (modularization, documentation, code review, unit and regression testing, etc.) are standards because they are necessary to create a trustworthy and reliable program. This is standard practice in the private sector because when their programs don’t work, the business fails. Another difference here is that when that business fails, the program either dies with it or is reconstituted in a corrected form by another business. In an academic setting, it’s far more likely that the failure will be blamed on insufficient funding, or that more research is required, or some other excuse that escapes blame being correctly applied…..

I know nothing about the coding aspects, but have long harboured suspicions about Professor Ferguson and his work. The discrepancies between his projections and what is actually observed (and he has modelled many epidemics) are beyond surreal! He was the shadowy figure, incidentally, advising the Govt. on foot and mouth in 2001, research which was described as ‘seriously flawed’, and which decimated the farming industry, via a quite disproportionate and unnecessary cull of animals.

I agree with the author that theoretical biologists should not be giving advice to the Govt. on these incredibly important issues at all! Let alone treated as ‘experts’ whose advice must be followed unquestioningly. I don’t know what the Govt. was thinking of. All this needs to come out in a review later, and, in my view, Ferguson needs to shoulder a large part of the blame if his advice is found to have done criminal damage to our country and our economy. This whole business has been handled very badly, not just by the UK but everyone, with the honourable exception of Sweden.

I’m not sure that the code we can see deserves much detailed analysis, since it is NOT what Ferguson ran. It has been munged by theoretically expert programmers and yet it STILL has horrific problems.

…………………………………………

Eric B Rasmusen

The biggest problem…is not making the code public. I’m amazed at how in so many fields it’s considered okay to keep your data and code secret. That’s totally unscholarly, and makes the results uncheckable.

………………………………………………………

Annette Jones

I am a lay person who does not understand computer modelling….but for such huge decisions to be made without adequate peer review of the data is shocking.

…………………………………….

LorenzoValla

The bottom line is that if the recommendations from a computer program are going to be used to make decisions that significantly affect the daily lives of millions of people, the friggen program absolutely needs to be as solid as possible, which includes frequent code review, proper documentation, and in-depth testing. Then, it needs to be shared for peer review.

…………………………………..

Anne

Here are the results of Professor Ferguson’s previous modelling efforts.

This is stunning in how awful this all is. The word criminal comes to mind. Thank you so much for this assessment.

…………………………………………

Thomas

Are the mainstream media capable of covering this? That is what frightens me.

Who is going to be the first to point out that the reason sick people weren’t getting hospital beds is because the models were telling us to expect thousands more sick people than there were? How many people died because of this?

And what about all this new normal talk? All these assumptions life will change for ever built on fantastic predictions which are being falsified by Swedish and Dutch data?

This diktat that we can’t set free young people who are not threatened by the virus because the model says hundreds of thousands would die? All nonsense.

The infamous “Harry_Read_Me” file contained in the original Climate Gate release springs to mind. As I recall, it was a similar tale of a technician desperately trying to make sense of terrible software & coding being used by the “Climate Scientists” – one of whom had to ask for help using Excel…

dodgy geezer

MUCH more politics in Climate Change! You are simply not allowed to question the basic assumptions..

That’s lightning fast. Vaccines typically take years (or in some cases, decades) to develop,…

… The speed is made possible by a new technology: mRNA vaccines, … mRNA vaccines work kind of like a computer program: After the mRNA “code” is injected into the body, it instructs the machinery in your cells to produce particular proteins. Your body then becomes a vaccine factory, producing parts of the virus that trigger the immune system. In theory, this makes them safer and quicker to develop and manufacture, …

… Bancel isn’t the only optimist. In the past 20 years, there’s been an explosion of companies developing mRNA vaccines for a large swathe of diseases, and many have turned their attention towards the COVID-19 pandemic. German company BioNTech is working with Pfizer to develop an mRNA vaccine. Human trials have already begun. Another German company, CureVac, is backed by the Gates Foundation and is expected to begin vaccine trials this summer. Lexington, Massachusetts-based Translate Bio has partnered with French pharmaceutical giant Sanofi to develop its mRNA vaccine, with human trials expected to start later this year. …

I, for one, will decline the offer of any such vaccine and, if necessary, resist its legislated imposition on my person to the point of imprisonment, in the confidence that, when under-informed members of the public who naively agree to be afflicted by such quackery begin suffering ill effects in numbers too large to be ignored, I will have grounds for an appeal.

John

That mRNA and other related initiatives are funded by Bill Gates gives further pause as to the intent involved:

“Taking their cue from Gates they agreed that overpopulation was a priority,”