Catastrophic Meltdown of Canadian Ice Cores!

Guest post by David Middleton

A precious collection of ice cores from the Canadian Arctic has suffered a catastrophic meltdown. A freezer failure at a cold storage facility in Edmonton run by the University of Alberta (UA) caused 180 of the meter-long ice cylinders to melt, depriving scientists of some of the oldest records of climate change in Canada’s far north.

The 2 April failure left “pools of water all over the floor and steam in the room,” UA glaciologist Martin Sharp told ScienceInsider. “It was like a changing room in a swimming pool.”

The melted cores represented 12.8% of the collection, which held 1408 samples taken from across the Canadian Arctic. The cores hold air bubbles, dust grains, pollen, and other evidence that can provide crucial information about past climates and environments, and inform predictions about the future.

The storage facility is normally chilled to –37°C. But the equipment failure allowed temperatures to rise to 40°C, melting tens of thousands of years of history. Among the losses: some of the oldest ice cores from Mount Logan, a 5595-meter-high mountain in northern Canada. “We only lost 15 meters [of core], but because it was from the bottom of the core, that’s 16,000 years out of the 17,700 years that was originally represented,” Sharp says.

Scientists also lost 66 meters of core from Baffin Island’s Penny Ice Cap, which accounts for 22,000 years—a quarter of the record. That leaves “a gap for the oldest part, which is really the last glaciation before the warming that brought us into the present interglacial,” Sharp says.

Investigation points to two malfunctions

An investigation into the freezer malfunction found fault with the cooling system. Specifically, the refrigeration chillers shut down due to “high head pressure” conditions. Essentially, the chillers were not able to reject their heat through the condenser water system—heat instead of cold circulated through the freezer.

Compounding matters, the system monitoring the freezer temperatures failed due to a database corruption. The freezer’s computer system was actually sending out alarm signals that the temperature was rising, but those signals never made it to the university’s service provider or the on-campus control centre.

In the short term, refrigeration technicians are monitoring the freezers through twice-daily checks, Sharman said. The computer database corruption was resolved by adding a second monitoring controller, which is now issuing real-time messaging updates every eight hours.

In the real world, anything that important is not only monitored by multiple independent systems; every component is also tested regularly to confirm it is working correctly.
For example, you would throw a switch on the thermometers to make them send out alarm signals, then verify that all the proper authorities received a notification in a timely manner.

It was sheer incompetence not to have redundancy in the cooling system and the monitoring system. Maybe even more incompetent not to have split the cores (lengthwise [timewise]) in two and stored them in two different facilities. Sort of like the CRU losing the raw data because they couldn’t afford a server. If you are spending tons of our (taxpayers’) money, at least show some sense and spend it wisely. But, of course, these are scientists, not engineers – never, ever consider the possibility of failure and provide for it.

Incompetence?… I can just imagine the newly indoctrinated, scientifically trained climate scientists getting their new ice cores. Get to the top of a high, snow-covered mountain, find a nice slope of approx. 45 degrees, start drilling down the side of the mountain slope at 45 degrees, and there’s yer new ice core.

The New Democratic Party (NDP) socialist/commie gov’t, elected less than 2 years ago, has a goal of outright destroying Alberta. Very similar to, and maybe even worse than, South Oz… a real basket case of governance.
This is the gov’t that cut rapid initial attack for forest fires right after being elected; then Ft. McMurray burns down and they try to blame it on climate change. Sad.

Ron Williams says “This is the Gov’t that had just cut rapid initial attack for forest fires after being elected”

I disagree with the Alberta NDP on nearly everything, but firefighting money ALWAYS comes out of the emergency budget due to the wild swings in resources required from year to year. Their claims that the fires had anything to do with climate are, as you describe it, sad. But you cannot claim that budget shuffling in any way impeded fighting the Ft. Mac fire. The real culprit was dry, warm winds and poor fuel-load management. After that, all the money in the world can’t stop a fire like that, as any seasoned firefighter will tell you.

Sorry Dave in Canmore, you are outright wrong on this, and I ask why you are providing cover for the NDP in Alberta? A little biased, are we? The story below was published April 19th, 2016, and the Fort Mac fire started May 1st/16, causing $3.58 billion in insured damages and $9.5 billion in direct and indirect costs, which was almost equal to the entire budget deficit of over $10 billion this fiscal year. It is absolutely despicable that any gov’t would cut budgets for firefighting, but this is what happened. Don’t lie for the NDP, Dave. The cutbacks caused the confusion that led to a momentary delay when they could have had a chance at knocking that fire down.

Furthermore, it was absolutely declared a human-caused fire, with lightning having been completely ruled out. Some theorize it was a wildfire 15 km out of town that was the cause of the fire. But a second fire was already burning within town, in the north end near a garbage dump, and resources WERE NOT immediately dispatched to it; they were dispatched to the fire burning 15 km out of town.

“Complicating matters was a second fire burning at the same time within Fort McMurray near an industrial area in the city’s north end. That fire was moving up a hill towards a row of houses on May 1. Mr. Spring says that fire crews had to decide which to tackle first: the fire in a remote area south of town or the one bearing down on homes.

Video from May 1 shows tankers dropping fire retardant in Fort McMurray as helicopters poured buckets of water. Crews had decided to confront the blaze burning in the industrial north end–because it was first spotted within the city it doesn’t have a wildfire name like MWF-009.

“The choice had to be made between fire 009 and that second fire headed towards houses. Five out of five times anyone would choose to go after the second fire,” Mr. Spring said.

His company wasn’t asked to dispatch a helicopter to MWF-009 on May 1. Within two hours that fire had grown to 60 hectares, fed by strong winds. The first evacuation notices went out before dusk that evening.

Fort McMurray’s 80,000 residents were evacuated two days later, on May 3.”

Gosh! Surely you’re not hinting at some sort of conspiracy rather than cock-up! That would be a foul slander on scientists, who we know are of the highest integrity. I am shocked – shocked – that any such suggestion could even be conceived, let alone expressed in public. I have never heard of …

It’s worse than we thought. Global warming is now causing ice cores to melt inside of freezers. The warming has truly become “global.” We will have to move freezers off planet if we want to keep things frozen.

I can tell you those things make researchers go more than Hmmm. Freezer failures were a bane where I was facility manager for a state college with research facilities. The problem was that the low bid got the sale, so we had cheap stuff.

The biggest problem is that they outlawed the only refrigerants that work in a cascading cryo-system without reaching ridiculously high pressures. Newer refrigeration equipment is much more prone to Freon leak-down failures.

I’m a software developer. It simply is not smart to rely on software for something this important, if there are other options.

Software’s greatest risk of malfunction occurs when something unusual happens – the code to handle unusual situations is rarely executed and difficult to test, so it is the part of the system most likely to harbour undetected defects.

It’s not that hard to test; all you need is a proper test rig. The hardware is set up with a variable resistor to represent the temperature measuring unit (or a voltage input, if that is the kind you are using),
plus switches and lights to represent the other inputs and outputs of the system. Then you can completely exercise even the most unlikely of scenarios.
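The commenter’s test-rig idea carries over to software as well. Below is a minimal, hypothetical fault-injection sketch in Python – the function names and the –30 °C threshold are assumptions for illustration, not details from the actual facility. The sensor is replaced by a fake that the test can drive to any value, so every alarm path, including a dead sensor, gets exercised:

```python
# Hypothetical sketch of testing a freezer monitor: the sensor input is
# swapped for a fake we can drive to arbitrary values, so every alarm
# path can be exercised without touching real hardware.

ALARM_THRESHOLD_C = -30.0  # assumed setpoint; the real facility ran at -37 C

def check_temperature(read_sensor, notify):
    """Poll one sensor reading; raise an alarm if it is too warm or missing."""
    temp = read_sensor()
    if temp is None:              # a sensor failure is itself an alarm condition
        notify("SENSOR FAULT")
        return False
    if temp > ALARM_THRESHOLD_C:
        notify(f"HIGH TEMP: {temp:.1f} C")
        return False
    return True

def test_alarm_paths():
    sent = []
    assert check_temperature(lambda: -37.0, sent.append)       # normal: no alarm
    assert not check_temperature(lambda: -5.0, sent.append)    # warm: alarm
    assert not check_temperature(lambda: None, sent.append)    # dead sensor: alarm
    assert sent == ["HIGH TEMP: -5.0 C", "SENSOR FAULT"]

test_alarm_paths()
print("all alarm paths exercised")
```

The point is the same as with the variable resistor: the alarm logic never knows whether a real thermometer or the test harness is on the other end.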

I’ve built systems like this to emulate nuclear power plants. A single thermometer for a single room should be trivial.

Mark,
Yes, I built a small model of a grain mill once that I was rewriting the control software for (because the original developer was incompetent). All the sensors were represented with simple switches and the motors/actuators with LEDs. This allowed me to test every conceivable failure and operating condition. The new software was not only reliable, but also helped find faults in the hardware (bad relays, bypass diodes, etc.).

The kind of software monitoring mentioned in the article is quite simple by comparison, and not that hard to test, as you mentioned. Testing the complete system from time to time would have uncovered the corrupted database and probably saved the ice cores. There really is no excuse for this level of incompetence. The solutions for these kinds of problems are well known and commonly implemented.

Perhaps having an analogue back-up alarm, independent of any higher-tech digital system, would at least alert those responsible for maintenance to the problem. If it had its own redundant back-up analogue alarm, separate and with its own sensor etc., in conjunction with the main digital alarms, then the analogue alarm bells would have been ringing regardless of the high-tech monitoring equipment. Sort of like having an analogue volt meter wired independently into an electrical panel as a visual back-up for a digital meter (in case the display burns out). Those damn cosmic rays…

The first shuttle had 3 computers that did all flight calculations. The calculations were compared against each other before any decision was made. Two of the computers were made by one manufacturer and the 3rd was made by a different one. This helped to ensure that all three computers wouldn’t make the same mistake because they all had the same software bugs.
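The comparison scheme described above can be sketched in a few lines. This is an illustrative majority vote under the commenter’s description, not the actual shuttle flight software:

```python
# Illustrative majority-vote scheme in the spirit of the shuttle's
# redundant flight computers: three independently computed results are
# compared, and the value agreed by at least two of them wins.

from collections import Counter

def majority_vote(a, b, c):
    """Return the value at least two of the three inputs agree on,
    or None if all three disagree (a condition needing human review)."""
    value, count = Counter([a, b, c]).most_common(1)[0]
    return value if count >= 2 else None

# One faulty unit is outvoted; total disagreement is flagged.
assert majority_vote(-37, -37, 12) == -37
assert majority_vote(-37, -36, 12) is None
```

Using a different manufacturer for the third computer, as described, attacks common-mode failures: a shared design or software bug cannot outvote itself across all three units.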

I come from an aviation background with computer-controlled, FBW/FBL, uninhabited aircraft. We ALWAYS include software in our flight control systems to “handle the unusual situations” – at least all the ones we can anticipate. That’s the purpose of safety cases. You are right, it is difficult to test, but it is better than a smoking hole in the ground.

And I have to disagree – unless you are working in AI with fuzzy logic, software never malfunctions: give it the same inputs, you get the same outputs. Now, in my experience, failures owing to hardware inadequacies have often been blamed on “software glitches”. Ain’t fair. Also, of course, if you have chosen a computer language known for inadequate design – doing garbage collection poorly, not cleaning up unused allocations, or handling thrown exceptions poorly – then maybe you can call that software failure, as it is a failure in the language choice/design. But there are languages out there that are better designed and implemented.

Now, having disagreed, it is merely a gentleman’s disagreement – please keep up your articles.

The fact is that random errors can and do occur. Most are probably the result of stray cosmic particles, but we obviously cannot be certain. Others are, as you say, the results of programming errors, and as software becomes increasingly complex, they are more difficult to diagnose. As we start getting AI to develop software, this will get much worse.

The best solution is multiple identical redundant systems, allowing a consensus. I believe this is what is done in space missions. This cannot rule out hardware faults, however, and as I understand it, the Intel 386 chip is the only verified bug-free hardware in common use.

While working at a large avionics company, I had to write the software that performed the power on self tests for the error detection circuitry for the memory modules. There was a trick that allowed us to deactivate the circuitry long enough to write bad data to the memory locations, then when we read it back, it would trip the circuitry.
To the best of our ability, every circuit in the hardware was validated each time the system was powered up.
PS: If you think Unix takes a long time to boot up, you should have seen these systems.
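A loose software analogy to that power-on self-test, with all names here illustrative rather than taken from any real avionics code: a word is deliberately stored with its parity bit flipped, then read back to confirm the error detector actually trips.

```python
# Toy model of validating error-detection circuitry: plant a word with
# a deliberately wrong parity bit, then confirm the checker fires.

def parity_bit(word: int) -> int:
    """Even parity over the low 8 bits of a word."""
    return bin(word & 0xFF).count("1") % 2

class ParityMemory:
    """Stores a parity bit with each word and checks it on every read."""
    def __init__(self):
        self.cells = {}  # addr -> (word, stored parity bit)

    def write(self, addr, word, corrupt=False):
        p = parity_bit(word)
        if corrupt:
            p ^= 1  # models bypassing the checker long enough to plant bad data
        self.cells[addr] = (word, p)

    def read(self, addr):
        word, p = self.cells[addr]
        if p != parity_bit(word):
            raise RuntimeError(f"parity error at address {addr:#x}")
        return word

mem = ParityMemory()
mem.write(0x10, 0xAB)
assert mem.read(0x10) == 0xAB            # a clean word reads back fine
mem.write(0x20, 0xCD, corrupt=True)      # plant a deliberate error
try:
    mem.read(0x20)
except RuntimeError:
    print("error detector verified")
```

The essential trick is the same as the one described: you must be able to inject a known-bad condition, or you can never prove the detector works.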

As a fellow s/w developer, I tend to agree with you in general. But s/w specifically designed to monitor temperature not being tested to detect high temperatures or complete loss of data? Kinda hard to believe.

Eric, this is exactly the sort of thing fire alarm systems handle all the time with great reliability. Commercial fire alarms that report to a central station send a test signal every day. Under the latest code a test signal will be required 4x a day. There is no excuse for not using some similarly reliable system to monitor the situation.
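The supervised test-signal idea above can be sketched as a simple heartbeat check: the monitored site must check in on schedule, and a missing signal is itself treated as a fault, so a dead transmitter cannot fail silently. The interval and grace factor below are assumptions for illustration:

```python
# Sketch of the supervised "test signal" idea from fire-alarm practice:
# the central station tracks when the site last checked in, and an
# overdue heartbeat is treated as a fault in its own right.

HEARTBEAT_INTERVAL_S = 6 * 3600   # assumed: 4 test signals per day

def missed_heartbeat(last_seen_s, now_s, grace=1.5):
    """True if the site is overdue by more than `grace` intervals."""
    return (now_s - last_seen_s) > grace * HEARTBEAT_INTERVAL_S

assert not missed_heartbeat(0, 6 * 3600)   # checked in on time
assert missed_heartbeat(0, 10 * 3600)      # overdue: dispatch a human
```

This inverts the failure mode that sank the freezer monitoring: instead of relying on the failing system to announce its own failure, silence itself raises the alarm.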

Keep in mind – the ice is still there, albeit not in the warmed up freezer, but Baffin island is still there and the bottom of the ice is still reachable! All is not “lost” – just what had been archived!

My thoughts, too, tom … it’s not as if those glaciers are going anywhere for the next few thousand years, at least … particularly the ice at the bottom of the coreholes. All it takes is time and money to recollect.

Inexcusable! System checks, eyes on monitoring and backup must be in place. To claim a perfect storm of failures is to show incompetent planning and poor execution of preventive protocols. Somebody didn’t do their FMEA!!!!

Database corruption??? … Red flag here … Seems this is a ‘it’s nobody’s fault excuse’ since the consequences would likely result in someone getting terminated.

Alternate scenario: if somebody did something stupid like use their own credentials when setting up the system, and said person is no longer with the university and said person’s credentials were disabled, well… the system would no longer have valid credentials to communicate with the database. I have seen this scenario played out a million times. The person either doesn’t understand, or decides they will create a service account later but never gets around to it. If the said person’s supervisor is still around, it may not be career-enhancing to be responsible for losing such a valuable asset.

Bryan A, those old records have been sufficiently disproved by the latest methods of consensus that people could not detect or record the proper temperatures before computers and satellites were in place.

Their explanation is wrong and shows no understanding of how these systems work. The most likely cause is the tripping of the high-pressure safeties, as stated (for any number of reasons), with the result being the constant input of heat from the evaporator motors, which may run continuously even though the compressors are not. Still, some questions I would have about that. 25 years as a journeyman refrigeration tech, system designer, and even salesman (under duress). With a well-insulated room, even lighting can drive up temps quite quickly and to pretty high temps. Lots of big evap fans running? – X 10.

So what’s the problem – it’s just the same as kriging, averaging, or hommorogerising the data.
Saves a muppet from getting it wrong in Excel. And I’ve always said that an analog computer will give better answers than any digital one.
Climate Science finally moves on.

What’s the medical concept? When you hear hoofbeats, don’t go looking for zebras? Most likely garden variety incompetence. Most mechanical engineers know next to nothing about refrigeration, most University purchasing departments know nothing about hiring competent firms and have to take the lowest bidder with at least superficial qualification. I also worked a lot with controls companies, ALL of which were generally lacking in knowledge of what they were controlling and how those components worked and worked together. I saw dozens of screw ups like this, just none so spectacular.

The next university department that shells out coin for a “top of the range” freezer will be the first. Gotta cover those “fees” for university “overhead” related to the program. Forget about top-of-the-line equipment.

No problem; in line with normal procedure in climate ‘science’, they will just ‘model the problem away’, and of course they can always refer to the magic trees. Heck, the science is so ‘settled’ there is hardly any need to collect ice cores now, as they ‘know’ the result they want.

Rachel Notley, Premier of Alberta, with her Green Agenda and buddies (likely left-wing so-called intellectuals @ U. of A.) might be asked to account for this inexcusable event. Are these data not as important as Dead Sea Scrolls (and the like) and what opprobrium would be visited on the so-called guardians of the Archive in *that* circumstance?

It was all down to the routine update of the software… it seems some of the coding had come from the Mann Hockey Stick data… and it, being corrupted, did likewise to the freezer control software.
After all, that makes inevitable sense… adding corrupt data, forcing warming!!!

I wish I could blame this on the climate-warming crowd; however, I’ve been involved in a number of “university” projects, and the ignorance of your average professor of even the most basic quality control is amazing to me.

Actually, it was the use of solar panels in mid-winter, during the night that caused the failure. Her highness wants to shut down our coal fired plants in a few years, so maybe that reliable source of solar/wind will be much better…. /sarc….

MISSION CRITICAL FACILITIES
When “whoops” is not acceptable, its time to call in engineers skilled in Mission Critical Facility design and operation. e.g. FTCH

“[FTCH] has applied our highly capable engineering staff to address the challenges of Mission Critical Facilities. These include data centers, high performance computing suites, operations centers, nanotechnology facilities, vivaria, and other research and scientific facilities. Research and computation are expensive and time consuming endeavors. Our systematic process for systems definition and design enables us to excel with these complicated and difficult facilities. We work to make sure that facility constraints do not limit mission success. We go well beyond the traditional belt and suspenders design approach and deliver the true value of a facility or system built to respond to an owner’s needs.”

Mission-Critical Backup Redundancy SACOM hardware platforms can be configured for total
redundancy for belt and suspenders reliability. The system automatically detects a system fault and
seamlessly switches to the backup system. In addition, an alarm is immediately sent to the technical staff,
informing them of the fault, and what must be done to rectify the situation.

Type B Systematic Errors
Related are the problems of Type B systematic errors, which can equally destroy the objective scientific basis of models and measurements.
Now how can we get NASA’s Independent Verification and Validation Program engaged to thoroughly vet all global climate models – to verify and validate that they fulfill the purpose of providing objective data and information to politicians and the public, free from Type B systematic error and political biases?

Gary,
That may well have been done. However, it is so difficult and expensive to obtain the cores that they are archived so that they can be used if a different question arises that the cores can shed light on. Also, as technology advances, more information can be gleaned from the archived cores as a check on the original results.

A very long way from ice cores I know but when farmers take soil samples for analysis across fields it is a requirement that those samples be taken at certain spaced and regular intervals in that field to achieve a reasonable level of accuracy in the final laboratory analysis of minerals, fertiliser levels and etc in that field.

As somebody who is quite ignorant about the statistics involved in the accuracy of the data from each of the one off and quite isolated ice cores that are taken from the various global deep ice deposits on the planet;

What is the true, real world statistical accuracy of the analysis and consequent data from each of these single one off in location and depth and therefore time, ice cores ?

How many similar in depth and etc ice cores from close and adjacent coring locations would be needed to statistically verify the validity of the data supposedly derived from each of these deep ice cores?

Why, in fact, whilst the coring equipment, living quarters, and back-up equipment are all in place for a major coring operation in admittedly very harsh conditions, is a second set (or preferably more) of adjacent verification cores not drilled and archived, to remove any doubts about the statistical validity of the data being collected from those cores?

Why isn’t there a policy of both collecting a grouping of cores to enable verification of any analysis and data from that grouping of cores plus a policy of locating and housing those precious cores in quite separate and distinct locations to counter episodes of the now quite regular and not at all unusual scientific incompetency such as we see described in this case?

Reminds me vaguely of a freezer in a university pharmacology department that was filled with the carcasses of experimental animals that had been injected with radioactive tracers.

When the freezer failed, the whole contents melted into a single stinking, radioactive soup. Then, when freezer function was restored, the soup froze into a solid block. The whole thing now was of course too heavy to be moved, so it just stayed there for years, with nobody having a frigging clue as to disposal. Happened a good while back, but might still be there …

Too much head pressure happens when someone overcharges the system with too much refrigerant. This causes the compressor to shut down within a short time of starting, to avoid damage. It is usually caused by a technician used to working on systems that take more refrigerant, who assumes this system is the same. I would bet this system was redundant, with more than one compressor, but if someone overcharged each of them, then they would all fail. Just my guess.

The Mount Logan core was the only one in the western Canadian Arctic. It showed cooling over the past 200 years. McIntyre pointed out in 2013 that it was therefore left out of several recent Arctic hockey-stick reconstructions, including PAGES2K. Now it is melted, so it doesn’t have to be embarrassingly left out anymore. How convenient.

It takes 48 hours for a well-insulated grocery-store walk-in freezer to gain 4 degrees C after a CONTINUED power loss. Because it is insulated, it will not get above zero C for 4 days unless the store has a complete meltdown of its air conditioning system (meat manager in Daytona Beach, 10 years). Going from –37 C to +40 C is not possible unless intentional!

It doesn’t take a rocket scientist to install a low-temperature alarm on this freezer. Thousands or maybe millions of freezers in the world are protected in such a manner. Seems to be something fishy here.

It’s standard stupidity I think. A simple alarm is not sufficiently high tech to be trusted with something this important. They would have hired Honeywell or Johnson controls or some such to provide an alarm “system”. That means solid state sensors ( probably put in the wrong places), wired to a computer which monitors the sensors through some complex sampling software which involves time and temperature functions and perhaps disables during coil defrost cycles. This software signals alarm specifics to secondary software. A hundred places where this can go wrong. KISS!
Of course, if you make it simple, Honeywell and their brethren contractors won’t have a contract to maintain at margins exceeding 100%.

It’s a university, for goodness sake. As soon as the grad projects were finished they forgot about the ice cores. The university has much more important concerns like developing condos. Give them a break!

It is an expensive and embarrassing accident, but it is no tragedy. There is plenty of ice where that came from. It is not as if it had melted in the meantime and was irrecoverable. They should ask for a grant to re-drill that 12.5% with more modern equipment and better techniques, and say that they are going to demonstrate that the ice is going to disappear, so it had better not happen again in the future.

I stand to be corrected but I watched an episode of Discovery Channel Canada about these coolers. They are brand new and the cores had just been transferred from a facility in eastern Canada, can’t remember where.

That Discovery Channel is very pro anthropogenic warming, to the point of being almost fanatical. They treat Suzuki almost like a god.

Amazingly, a Discovery Channel crew are indirectly responsible for saving much of the ice in that cooler. Many cores had been moved from the doomed freezer to another one with better light for the Discovery Channel camera operator’s benefit.

Just my 2 cents’ worth, but Duncan’s comment – “The dog ate my homework” – is still tops! This is way too convenient. As in, “let’s lose the data and get funding for another grant to replace the cores.”
And sorry if this sounds petty, but some of the stuff that the left does is “suspect”, to say the least.

The simplest solution: create a storage facility in Antarctica. Cooling systems may be unnecessary… “the highest temperature ever recorded at the Amundsen–Scott South Pole Station was −12.3 °C (9.9 °F) on Christmas Day, 2011.” Scientists worried about global warming can go there to study the cores.

Mount Logan temperatures from δ18O isotopes versus others in the Northern Hemisphere, going back to the beginning of the last ice age. Logan on top. Note that 2002 temperatures are still in the Little Ice Age range, were higher around 1800, and much higher 9,000 years ago.

Ie. The important data is already published (although probably very hard to find).

Considering what those ice cores must have cost, failing to have a periodically tested temperature alarm system in place at their storage facility was like putting the contents of a jewelry store in a cardboard box on the sidewalk, and sealing it with Scotch Tape for security.

Here’s a discussion of freezer temperature monitors. There are many products out there, some of them very inexpensive. Many of them can directly send automatic text messages, emails & phone calls in the event of a failure.

This looks to me like what we can expect to see more and more of: no one today seems able to take responsibility for what they are responsible for. They somehow think a computer and computer software will take the place of just plain hard work, whether it’s modeling floods in a basin – God forbid we go out and actually measure the conditions; that’s too much work, and after all, that’s what computer models are for. In this case it was a simple matter of having someone check on the conditions twice a day. In the past a janitor did such work; he was conscious of the building he maintained and would know, on a simple walk-through, if something did not sound right. But with the music blaring, or the latest game going, on today’s janitor’s smartphone, somehow that is lost.

Sometimes I think the entire climate change boondoggle is down to the ability of computers to provide sexy visuals to the point that the screen becomes the research, becomes the fact, becomes the theory, becomes the object of worship.

How can I disable the Google ads at the end of the article? Every time they reload they reset my browser so that I see the ads and not the content I was just trying to read! It makes it impossible to read the article from start to finish. It has glitched me three times even as I type this reply.

Yes, many U of A professors are left-wing political advocates for climate alarmism. The list is very long, and it includes Andrew Leach, who pushed hard to convince the new and naive government to implement the carbon tax. They know it’s their lunch ticket, and the government’s. The premier listens to them like they are God, to the detriment of the low and middle class. The NDP will be a one-term government because of this foolish Marxist agenda, which most people are waking up to now. The only place where they will get seats is in “Redmonton,” a big socialist city in northern Alberta. The U of A has been a hub for this Marxism for a long time. The only saving grace is the rural ridings that could oust this government, but they are doing their damnedest to add and change riding boundaries where their support is. The silent majority will speak with their vote (just like Trump supporters), and if not, the province will be doomed after 2 terms of NDP.

That was a new facility, just opened last October, with the ice moved in on March 24.

“The ice core archive is the world’s largest collection of ice core samples from the Canadian Arctic. The collection represents more than 80,000 years of evidence of changes to climate in 1.4 kilometres of ice. The collection contains 12 ice cores that were drilled in five locations.
The collection of ice samples, drilled out of the depths of the Arctic over the past 40 years, was carefully transported from Ottawa to Edmonton in January.
They were shipped in a freezer container chilled to -30 C, equipped with a custom-built monitoring system.
The university had built a $4-million facility to keep the ice safely frozen. The ice cores remained in the freezer container until they were moved into the new facility on March 24.”

Another consideration: what happens if they redrill all the cores and they turn out to show different results from the first set? This might, if compacted ice is as unreliable an indicator of past atmosphere as some have suggested, prove to be a lot more troublesome than our careless friends suppose. Any thoughts on that possibility from our expert contributors?

Wait. What? I thought these guys were the ultimate experts at reading thermometers, after generations of knuckle-draggers who didn’t even know what time it was. How is it these geniuses missed their own thermometer, when they possess the magical ability to read thermometers from over a century ago and half a planet away?
