Ocean prophets

These scientists can predict the direction an oil spill will take, or whether salmon lice will infect a neighbouring fish farm.

April 22, 2010: It’s been two days since the disaster alarm went off in the Gulf of Mexico. News of the Deepwater Horizon oil rig explosion dominates television and newspaper reports. Huge amounts of oil are spewing into the ocean from the uncapped well. Experts are stunned and paralysed by the news.

On this spring day, SINTEF scientist Mark Reed – whose trademark is marine environmental technology – is on his way out the door to a meeting. The shrill ring of his phone delays him, and he turns around on the doorstep. Bill Lehr of the American National Oceanic and Atmospheric Administration (NOAA) is on the line. The two have chatted jovially before about other joint projects. Now the situation is far more high pressure:

“Can you help us?” asks Lehr. “We need to find out what’s going on under the sea in the Gulf.”

“We can,” answers Reed.

“Can you start immediately?”

“No problem, we can handle it.”

The simulation shows the release of salmon lice near the islands of Smøla and Frøya. The white dots mark several fish farms, and the simulation traces the lice released from the farms in this area. Other facilities, located between the island of Hitra and the mainland, could be infected by the lice within two to four days. Ill: SINTEF

Twenty-four hours after the call, the Norwegian researchers have begun downloading data on the currents and winds in the Gulf of Mexico, and the computer models have been cranked up – even though the scientists still know very little about the kind or amount of oil spilling into the Gulf.

The only model in the world

It’s been over a year since the accident.

BP shares have long since risen on the US Stock Exchange, the deepwater drilling ban in the Gulf has been lifted, and the explosion in the Gulf of Mexico has gone down in history by the time Mark Reed welcomes me with a warm handshake.

“We could follow the oil the whole way and keep track of every kilo.”

Senior Research Scientist Mark Reed

Inside the office, he shows me a little of the huge volume of email exchanges that went on between Norwegian researchers and US authorities in April last year.

“One challenge we face is to estimate the amount of oil flowing out from the borehole,” says an email from Bill Lehr dated 26 April – a week after the explosion.

“It was about this time that journalists and officials in the United States began to understand the seriousness and extent of the disaster,” says Reed. “We now know that the discharge of oil was less than predicted. You don’t see so much oil on the beaches – but at the same time, we believe it may be hiding in the seabed.”

“But why did they contact SINTEF?” I ask. “Couldn’t BP use experts who were closer?”

“We were contacted by BP and NOAA because we have the only model in the world that can tell us both what is happening down in the ocean and what is happening on the surface,” says Reed. His gaze is quiet for a few seconds before shifting to the turquoise glow of his computer screen:

“Here you can see: The surface water in the Gulf of Mexico is at 30 °C – the bottom water is at 4 °C. When 60 °C oil bubbled out of the seabed on April 20, it took more than two hours before anything came to the surface. Why?” He answers himself:

“Because the oil was quickly diluted with water, its temperature dropped and its density increased.”

Reed and his colleagues calculated that the oil plume lost its buoyancy at about 1,100 metres. From there, only oil droplets and gas bubbles continued towards the surface. The gas dissolved in the water and never surfaced, while turbulence broke the oil into droplets. The largest drops, about 1 cm in diameter, rose quickly. The smaller ones had too little buoyancy and never reached the surface. The oil that did surface appeared two kilometres from the source.

“We could see all this with our computer model. We could follow the oil the whole way and keep track of every kilo. That’s the real answer,” says Reed.
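Why droplet size matters so much can be illustrated with Stokes’ law for the rise of small droplets – a textbook simplification, not SINTEF’s model. The assumed density difference (150 kg/m³) and seawater viscosity below are rough illustrative values:

```python
def stokes_rise_velocity(d, delta_rho=150.0, mu=1.5e-3, g=9.81):
    """Terminal rise velocity (m/s) of a small oil droplet of diameter d (m).

    Stokes' law: w = g * delta_rho * d^2 / (18 * mu).
    Only valid for small droplets (low Reynolds number); delta_rho and mu
    are rough assumed values for crude oil in cold seawater.
    """
    return g * delta_rho * d ** 2 / (18.0 * mu)

trap_depth = 1100.0  # metres: roughly where the plume lost its buoyancy
for d in (1e-4, 1e-3):  # droplets of 0.1 mm and 1 mm diameter
    w = stokes_rise_velocity(d)
    print(f"{d * 1000:.1f} mm droplet: {w * 1000:.2f} mm/s, "
          f"~{trap_depth / w / 3600:.0f} h to rise {trap_depth:.0f} m")
```

Because rise velocity scales with the square of the diameter, a tenfold smaller droplet rises a hundred times more slowly – hours for millimetre drops, weeks for the smallest ones, which drift off with the currents instead.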

The US government also knew that the Norwegian research community had a great deal of experience from previous work: from oil spills off Kuwait after the Gulf War, from an oil leak at the Statfjord field in 2007, and from two shipwrecks on the Norwegian coast – the Server at Fedje and the Full City outside Langesund.

On April 20, 2010, oil poured from a damaged well under the Deepwater Horizon oil rig in the Gulf of Mexico. This is what the hot oil flow looked like in SINTEF’s computer model. Ill: SINTEF

Creating a virtual ocean

One floor below Mark Reed and his colleagues, Dag Slagstad is bent over his keyboard. Slagstad works with SINTEF Fisheries and Aquaculture, and predicts the future too. But where Mark and his group have focused on oil spills, produced water discharges and predicting the risk related to offshore drilling, Dag’s group concentrates on ocean currents.

“So we can ‘drop’ different things in the ocean, and determine where they end up – anything from sediment and pollutants to biomass and salmon lice,” he says, chuckling.
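A minimal sketch of the idea – assuming a made-up, steady current field and a crude random walk standing in for unresolved mixing; the actual models are far more sophisticated:

```python
import random

def advect(particles, u, v, dt, steps, diffusion=50.0):
    """Drift particles with a current field plus a crude random walk.

    u, v: functions (x, y) -> eastward/northward velocity in m/s
    (hypothetical fields). diffusion is the random-walk amplitude in
    metres per step, a stand-in for unresolved turbulent mixing.
    """
    for _ in range(steps):
        particles = [
            (x + u(x, y) * dt + random.uniform(-diffusion, diffusion),
             y + v(x, y) * dt + random.uniform(-diffusion, diffusion))
            for x, y in particles
        ]
    return particles

# 100 "lice" released at a farm at the origin, drifting for 24 hours
# (hourly steps) in a steady 0.2 m/s eastward current.
release = [(0.0, 0.0)] * 100
drifted = advect(release, lambda x, y: 0.2, lambda x, y: 0.0, dt=3600.0, steps=24)
mean_x = sum(x for x, _ in drifted) / len(drifted)
print(f"mean eastward drift after 24 h: {mean_x / 1000:.1f} km")
```

The same machinery works whether the particles represent lice, sediment or oil droplets; only the release points and the physics attached to each particle change.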

For many years, Slagstad and his team of five or six researchers at SINTEF Fisheries and Aquaculture have used computational models to make predictions about climate effects and ecosystems. Based on atmospheric data from the Max Planck Institute in Germany, they can predict, for example, that pH values are dropping in the Barents Sea. As more CO2 dissolves in the ocean, some types of shell-bearing marine animals will face a tough time ahead.

“But as we get more high-resolution models and can zero in more closely on individual locations, we have had some interest from other parties – including the aquaculture industry,” says Slagstad.

Will the threat float by?

The researchers are now focused on the Norwegian coast, from Stadt all the way up to Troms, and are at work on an extensive project for the coastal counties. Slagstad shows on his computer how coastal areas are divided into grids, and how the model areas can vary from a close-up of only a few hundred metres to an overview that covers an area of twenty kilometres.

“What does the aquaculture industry want to know?”

“How salmon lice and viruses spread. Will currents and wind, for example, cause the lice from a neighbouring fish farm to harm my installation?”

Slagstad picks up a colourful map of the Trøndelag coast. Here, fish farms around the islands of Hitra and Frøya have been plotted on the map with large white dots. In a simulation, one of the farms is contaminated, shown by a red dot, and on the screen, we see how the colour spreads with ocean currents. Suddenly a fish farm to the east is “invaded”, and it does not look good for fish farm number two, right across the sound.

Giant computer programs

It is not so easy to understand what computer models actually are, and what happens when researchers feed numbers and measurement data on weather and wind into their computers. But the two researchers demystify the whole process – they see computer models as nothing more than giant computer programs that can show ocean circulation.

“When I arrived at SINTEF in 1992 I started with software I had brought with me from the US,” Reed says. “As time went on, we added various modules, and our field of work accelerated. Today we have twelve people who work with this. One of the modules focuses on oil blowouts in deep water. Another simulates the response from mechanical oil recovery and dispersion by means of chemicals. The more measurements and numbers we feed in, the better the results that come out.”

“Yes – measurements are extremely important,” Slagstad adds. “Making forecasts – calculating what will happen in the future – must be constantly corrected with new measurement data. In this way, our models are continually adjusted to reflect reality.”
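The correction Slagstad describes can be sketched as a simple “nudging” step – a minimal stand-in for real data assimilation, which weights model and observation by their error statistics. The gain value below is a hypothetical choice:

```python
def nudge(model_value, observation, gain=0.3):
    """Pull a model estimate part of the way toward a new measurement.

    gain is a hypothetical trust factor; real assimilation schemes derive
    the weighting from model and observation error statistics.
    """
    return model_value + gain * (observation - model_value)

# The model says the surface current is 0.50 m/s; successive buoy
# measurements come in a bit lower, and each one corrects the estimate.
state = 0.5
for obs in (0.38, 0.40, 0.39):
    state = nudge(state, obs)
print(f"corrected current estimate: {state:.2f} m/s")  # → 0.43 m/s
```

Each new measurement pulls the model a little closer to reality, which is why continuous observations matter as much as the model itself.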

When disaster struck in the Gulf of Mexico, SINTEF was contacted by the oil company BP. The US Government estimates that approx. 5 million barrels of oil were discharged from the deep sea floor. Photo: AP

Measurement data are key

And therein lies the challenge. Some areas can provide good measurement data. Other locations have astonishingly little information – like the Barents Sea. How can we say anything about biomass or oil spills in a giant ocean like this?

For some years, the Norwegian research and industry communities believed that wireless sensor networks could solve the problem of inadequate monitoring of the environment and marine resources in the North. But at the same time, they concluded that the goal had to be to cover only limited areas, where the sensor network could be deployed to measure specific events.

SINTEF is continuing to build its expertise in this area. In a group project called Ocean Space Surveillance (OSS), the two internal modelling communities have joined forces with ICT researchers to develop small, inexpensive sensors that should be able to make these kinds of measurements at some time in the future.

“The Norwegian Meteorological Institute has a model that predicts ocean currents and waves 72 hours ahead. Here, we can reach a resolution of 4 km, and soon we will be able to get down to 800 m. The goal is to use the weather warnings to simulate the movement of currents with even higher resolution if special situations occur in connection with events such as a jellyfish invasion, oil spills or toxic algal blooms,” says Slagstad.

New projects and big sales

Research in the field of predicting the future moves at a lightning-fast pace. Recently, the groups where Reed and Slagstad work were awarded a new EU project in Paris. Together with colleagues in Tromsø who have expertise in biological modelling, the Trondheim groups will use their models to predict what the Arctic will look like as it warms, and how that will affect transportation systems and the Arctic environment.

In another project (called Symbioses), which will run until 2016, a number of oil companies and research institutions as well as SINTEF and its two computational modelling teams will study the effects of oil spills in the Barents Sea.

In addition, licences for the Norwegian model simulator called OSCAR are being sold the world over. Oil companies cannot just sit and wait for an accident to happen. They must have preparedness plans ready, and must conduct monthly exercises to keep themselves in readiness.

“After the accident in the Gulf, sales have flourished – especially in England. Our buyers range from oil companies themselves to consulting firms hired to help oil companies with planning. Well-known oil companies such as Statoil, Eni, Total and BP, which are working in South America and Africa, have also obtained permission to use our program.”

“But aren’t you afraid of becoming unemployed?”

Reed laughs. “Hardly. First, we have to work constantly to make the model better. Second, we are sitting on expertise they have to have. The people who buy our models use them perhaps once a month. We ‘live’ with the models almost daily, and are always able to provide useful decision support when an accident happens.”


