Category: Biodiversity

A key part of our approach to CitySounds was involving the most relevant stakeholders from the outset. This included

soliciting their input to the initial experiment proposal;

inviting a broad spectrum of people onto our management team;

inviting an even wider group to our initial Co-design Workshop, in order to plan how to create engagement and impact around the project.

While it is not always easy to get a community gardener, a senior data technologist, a Council biodiversity officer and a sound designer talking to each other, it worked amazingly well in this experiment. The ‘kick-off’ and ‘touch-down’ (closing) Management Meetings as well as the Co-design Workshop provoked insightful, valuable and engaging discussions where knowledge, learning and ideas were shared across research disciplines and city sectors. We formed new relationships and are continuing to build on them. One outcome was that a conversation in the Co-design Workshop led to a team of people submitting a proposal to Nesta to extend the project to three more parks in Edinburgh, and we are very happy to say that project is now going ahead.

The more challenging part of the project was developing relationships with people who live in and around the Meadows and attracting them to our workshops. We had a very high response to the talk from Chris Watson and the Sonikebana event because it was publicised through a partner organisation — New Media Scotland — that had a large, established, active and interested community. Reaching out to community groups, Community Councils, and local residents requires working through multiple channels and building up relationships over time. While we worked as much as possible through the contacts and networks of the team members, it was challenging to attract community biodiversity enthusiasts and greenspace users to our workshops. Valuable insights from the final management team meeting were that (a) we should take our message out to community groups where they are already meeting, inform them about the project and build a relationship with them first, if we hope to get them to attend workshops; and (b) we will attract a lot more interest in the project once we have a larger volume of data from the devices in a form that can be easily shared.

This experiment also helped us to reflect on what we mean by ‘community’ and ‘citizen’ and how we engage with and reach people with new ideas and opportunities at the interface of technology and civic/city issues. We aimed to reach biodiversity enthusiasts with new monitoring technology, data and communication methods; technology enthusiasts with sound recording devices and biodiversity data; and sound art enthusiasts with audio and biodiversity data, but also people using or interacting with the Meadows who might have little or no experience of biodiversity monitoring, sound recording devices or biodiversity and audio data analysis and presentation. All of these people are in some way community members and citizens of Edinburgh, but the latter group was our ideal target and was, unsurprisingly, the most difficult to identify, reach out to, and draw in.

As we further develop the Edinburgh IoT network, we will be continuing our outreach activities and continuing to build relationships with people and groups interested in biodiversity monitoring. We will also continue collecting and using audio data to improve the ways that we interact with and value greenspace and the natural environment in the city. Finally, we will also be looking at ways that we can connect with and support community initiatives that are already underway and have strong and active groups around them. One example would be the Fountainbridge Canalside Initiative ‘Living Wild’ project, which is developing a community plan for greenspace within a major city development.

In summary, CitySounds was an excellent opportunity to begin community engagement with Edinburgh’s new IoT initiative, which is being designed as a Research and Innovation Service for experimenters. We think that it is absolutely essential for citizens to be involved in experimenting with tools, services, data and urban development. It is part of the explicit mission of key partners in this project, including the Edinburgh Living Lab and Edinburgh Living Landscapes, as well as of the Scottish Government, to ensure that citizens are actively involved in shaping the way the city develops, including the ways that technologies and data are used to understand, inform and communicate about city decision-making.

The CitySounds project held two workshops on 19 February 2018, with special guest Kate Jones from University College London. The ideas for the workshops were conceived at our co-design workshop earlier this year.

Two aims that we identified for the community workshops were a) to find out what people might want to learn about nature and biodiversity in the city through sound (as well as potentially other forms of environmental monitoring and data collection) and b) to demonstrate how and what we can learn through the initial sound recordings coming from the project’s Audio Capture Devices and perhaps teach some basic skills in audio data analysis.

Our first workshop took place in the afternoon at the University of Edinburgh Informatics Forum. Kate Jones presented an excellent example of learning about nature in the city through sounds — the Nature-Smart Cities project. The project brings together environmental researchers and technologists to develop the world’s first end-to-end open source system for monitoring bats, to be deployed and tested in the Queen Elizabeth Olympic Park, east London.

Kate gave a fantastic presentation about the project, starting with the foundation of monitoring biodiversity. How might we track biodiversity in urban areas and understand its role in helping us to live safely, productively and healthily? She encouraged us to imagine the Biodiversity version of ‘Industry 4.0’ — how could cyber-physical systems, Internet of Things, networks, data-driven and adaptive decision-making machines be employed to support biodiversity conservation and help stop the rapid loss of biodiversity across the planet?

Kate Jones describes data processing pipeline for bat monitors

Kate and her team developed the Echo Box, which is essentially a Shazam for bats. It picks up the frequencies that bats communicate with and uses an algorithm to identify the call and provide an indication of which species has been heard. It then sends the information back to a central server and displays the information online at http://www.batslondon.com/. Fifteen Echo Boxes are installed on lamp posts around Queen Elizabeth Olympic Park and have been continuously monitoring bats for three months.

Olympic Park Echo Box
While the original idea for the project came from Kate’s passion for biodiversity conservation, as other people found out about the publicly-available data, they generated their own ideas from it. A group of students built an arcade machine based on the data that has become a highlight at the visitor centre, while researchers added bat data to a 3D augmented reality visualisation of the park. Another group devised small 3D-printed gnomes placed around the park that people could interact with via a chatbot to find out more about bats in the park.

‘Memory Gnome’ from Olympic Park

We were all thoroughly inspired by the incredible amount of work that went into the project and the possibilities for learning about nature through sound while also engaging a wider population with biodiversity in the city.

Simon Chapple then shared the vision for the CitySounds project and encouraged us to begin imagining all the things that we could learn through audio data. Smart sensors can recognise what is taking place in the environment, and an array of multiple sensors can work out spatially where a sound comes from. In a particular area, audio data can allow us to identify species of birds present, bat activity, volume of traffic, car accidents and more – and a wide spectrum microphone can even allow us to record mice screaming at each other!

Following Simon, Jonathan Silvertown sparked our imaginations to the possibilities of all the different creatures that are roaming around our cities and that we could potentially learn about through IoT and other technologically-advanced forms of biodiversity monitoring. He showed us the National Biodiversity Network’s Atlas of Scotland, which keeps a record of all the creatures that have been recorded in a particular area. So, from where we were in the Informatics Forum in the centre of the city, this is what we might find:

Screenshot of interactive map from NBN Atlas Scotland

We hope that the CitySounds project will provide not only a replicable method for learning about nature through sound but also a specific insight into the Edinburgh soundscape, from nature (weather, animals, birds, insects, bats), activities (walking, cycling, playing sport, festivities) and transport (traffic, car horns, trains, planes) to machines (electrical and electronic devices, breaking glass, noise pollution), through to the one o’clock gun, the many incidents of fireworks and the festivals large and small that take place around the city throughout the year.

Come and help us build some wooden tree boxes, which will be installed around the Meadows with microphones inside them for the CitySounds project. This will be a great chance to learn some basic woodwork skills, whilst also contributing to an exciting community project. No previous woodwork experience required!

So finally, we have been able to bring the full CitySounds Data Collector architecture online, and are now receiving encrypted audio data from our field-test device, which is placed in a University private garden, via our external WiFi AP mounted on the 5th floor of the Main Library.

The image above shows the 10-second audio samples (transferred via scp and separately encrypted with GnuPG) flowing through onto the CitySounds server from one of our Audio Capture Devices (ACDs). Never has a directory file listing looked so pretty!
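This encrypt-then-copy step can be sketched as a small shell function. Everything here is a hypothetical placeholder for illustration (the GnuPG key ID, the server hostname and the paths), not our actual configuration:

```shell
#!/bin/sh
# Hedged sketch of the clip-shipping step: encrypt each 10-second clip
# with GnuPG, then push it to the collector over scp.
ship_clip() {
  clip="$1"
  # encrypt to a (hypothetical) public key; writes "$clip.gpg"
  gpg --batch --encrypt --recipient citysounds-key "$clip"
  # copy to the (hypothetical) server; delete locally only on success
  scp "$clip.gpg" citysounds@server:/incoming/ \
    && rm -f "$clip" "$clip.gpg"
}

# Example invocation, only meaningful on a configured capture device:
if command -v gpg >/dev/null 2>&1 && [ -f clip-001.wav ]; then
  ship_clip clip-001.wav
fi
```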

Our Raspberry Pi-based ACDs are also now fully time-synchronised through our local NTP server, ensuring they work collectively and accurately to cover each 60-second block of time. Once all six ACDs are deployed, they will each record a 10-second slice in sequence.
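Because the clocks are NTP-synced, each device can work out its recording slot purely from the time of day: seconds 0–9 of each minute belong to device 0, seconds 10–19 to device 1, and so on. A minimal sketch of that slot arithmetic (the device index and logic are illustrative assumptions, not the project’s actual code):

```shell
#!/bin/sh
# Which 10-second slot of the current minute belongs to this device?
DEVICE_INDEX=2   # hypothetical: 0..5, one index per ACD

slot_for_second() {
  # Map seconds-past-the-minute (possibly zero-padded, e.g. "07")
  # to a slot number 0..5.
  s="${1#0}"           # strip a leading zero so arithmetic stays base 10
  echo $(( s / 10 ))
}

# Record only when the NTP-synced clock falls inside this device's slot:
if [ "$(slot_for_second "$(date +%S)")" -eq "$DEVICE_INDEX" ]; then
  echo "device $DEVICE_INDEX: record a 10-second clip now"
fi
```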

We are now on track to deploy to the trees in the Meadows in Edinburgh early next week: this will be a major accomplishment, especially given the additional extreme weather and strike issues we have been having to navigate the last couple of weeks.

Shifting gear

I am in the thick of developing the sound installation for this project, which will reveal some of the concepts behind our work and show some of the sounds that will be captured by Simon Chapple’s sensor network. I’ll explain more about that in another post soon. Meanwhile, I’m taking a break from thinking about gel loudspeakers, compass and heading settings on mobile phones in order to say a little about my experience working with Simon’s Raspberry Pi, wireless, 192kHz audio beacon prototype earlier this year.

Simon lent me his prototype in order for me to hear what’s going on in my garden in late January and to run some noise profiling tests. I was keen to see if the small mammals that must live in the garden are interested and active around our compost heap. I dutifully positioned the sensor box where I hoped I’d hear mice and other mammals fighting over leftover potato peelings but sadly — as far as I can tell at least — I captured nothing of the sort: no fights or mating rituals at this time of year. The absence of activity is useful, since it suggests that there has been plentiful food for small mammals to find earlier in the year/day and they’re not risking the wind, rain, snow and something new in the garden to get a late night snack. However, a largely quiet night means that the few moments of sonic event are all the more interesting and easy to spot.

A word on the tools I’m using

I’ve been using SoX to generate spectrograms of the 10-second audio clips collected. It’s a good way to quickly inspect if there is something of interest to listen to. With over 9 hours of material, though, it’s not practical to listen to the whole night again. Instead, I first joined all of the files together with a single SoX command in the terminal on OS X:

sox *.wav joined.wav

I then generated a spectrogram of that very long .wav file. However, the resolution of a 9-hour file needs to be massive to give any interesting detail. Instead, I decided to group the files into blocks of an hour and then rendered a 2500-pixel spectrogram of each 10-second burst. It’s then very quick to scroll down through the images until something interesting appears.
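A rough sketch of that hour-grouping and batch-rendering workflow in shell. The timestamped filename pattern (YYYYMMDD-HHMMSS.wav) is an assumption for illustration, not necessarily what the devices actually produce:

```shell
#!/bin/sh
# Group overnight clips into one directory per hour, then render a
# 2500-pixel-wide spectrogram of each 10-second clip with SoX.

hour_of() {
  # Pull the hour field out of an (assumed) YYYYMMDD-HHMMSS.wav filename.
  basename "$1" .wav | cut -c10-11
}

# Sort clips into hour-NN/ directories
for f in *.wav; do
  [ -e "$f" ] || continue          # skip if the glob matched nothing
  mkdir -p "hour-$(hour_of "$f")"
  cp "$f" "hour-$(hour_of "$f")/"
done

# Render one spectrogram per clip (requires SoX to be installed)
if command -v sox >/dev/null 2>&1; then
  for f in hour-*/*.wav; do
    [ -e "$f" ] || continue
    sox "$f" -n spectrogram -x 2500 -o "${f%.wav}.png"
  done
fi
```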

Something suspicious

From the rendered spectrograms, I can quickly see that there were some interesting things happening at a few points in the night and can zoom in further. For example this moment at 1:43 am looks interesting:

Something suspicious at 1:43 am

It sounds like this:

I suspect that this is a mouse, cat or rat. Anyone reading this got a suggestion as to what it might be?

As the night drew on, the expected happened and birds began to appear — the first really obvious call was collected at 6:23 am. It looks like this:

First bird call 6:23 am

And it sounds like this:

Noise

If you’re able to hear this audio clip, then you’ll be aware of the noise that is encoded into the file. One of our challenges going forward is how to reduce and remove this. I’ve tried noise profiling and attempted to reduce the noise from the spectra, but this has affected the sounds we want to hear and interpret. Basically, by removing the noise, you also remove parts of the sound that are useful. I’m reflecting on this and wondering whether there are ways to improve how electricity is distributed from the battery to the Raspberry Pi in Simon’s box, and whether we need some USB cables with capacitors built in to stop the noise.

However, noise reduction may not be as important to others as it is to me. My speciality is sound itself, in particular how things sound, and I want to get rid of as much unnecessary noise as possible so that when I’m noisy, it’s intentional. However, for an IoT project, the listener isn’t going to be a human but a computer. Computers will analyse the files, computers will detect if something interesting is happening, and computers will attempt to work out what that interesting thing is based on things they’ve been told to look for. It’s highly likely that the noise, which very quickly makes these sounds seem irritating and hard to engage with for a human, may well be perfectly fine for a computer to deal with. Let’s see.
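For anyone who wants to try the noise-profiling approach themselves, SoX’s noiseprof/noisered effects implement it directly. The filenames below are placeholders, and the reduction amount is a starting point to experiment with rather than a recommendation:

```shell
#!/bin/sh
# SoX noise reduction: learn a noise profile from a stretch of audio that
# contains only the unwanted noise, then subtract it from a recording.
reduce_noise() {
  quiet="$1"   # clip containing only background noise
  noisy="$2"   # clip we want to clean up
  out="$3"     # cleaned output file
  # 1) learn the noise spectrum
  sox "$quiet" -n noiseprof /tmp/noise.prof
  # 2) subtract it; the trailing amount (0..1) trades noise removal
  #    against damage to the sounds we actually want to keep
  sox "$noisy" "$out" noisered /tmp/noise.prof 0.2
}

# Example (runs only if SoX and the placeholder clips are present):
if command -v sox >/dev/null 2>&1 && [ -f quiet.wav ] && [ -f noisy.wav ]; then
  reduce_noise quiet.wav noisy.wav cleaned.wav
fi
```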

Simon Chapple and I met with Peter Davidson, one of the City of Edinburgh Council’s Park Rangers, to look at the options for installing our Audio Capture Devices (ACDs) in trees across the Meadows. Although there was a fresh wind, we were fortunate that it was a clear, sunny day to carry out our survey.

Simon and Peter sizing up a tree

To start off, Simon gave a brief introduction to his ‘bird box’ enclosures and electronic kit, and explained how they would be attached to the trees using bungee cords, plus a padlocked cable for security.

Screen grab showing 2 bars for the organicity WiFi Access Point

We then did a quick tour of parts of the Meadows where we could see that we were in range of the newly-installed WiFi Access Point, appropriately enough named ‘organicity’. The main challenge was to find trees with branches in the ‘goldilocks’ zone: high enough for the ACDs to be out of harm’s way, but not too high for us to change the battery if necessary. (No, we haven’t yet got to the point where we can use solar panels or tap into the power source of lamp posts!) Another constraint is that we need to avoid trees which have been marked as possibly suffering from Dutch Elm Disease, though fortunately that doesn’t seem to be too prevalent on the Meadows.

Two views of the Community Garden supported by Greening our Street and FOMBL

We concluded with the happy feeling that there was a good number of trees that we could use when we are ready to launch the devices in public.

In a previous blogpost, we talked about how we were planning to organise a number of workshops as part of the CitySounds project. We’re now ready to launch the first one!

So please join us for our public workshops on how the Internet of Things and other new advances in technology can help us understand biodiversity and how the health of the urban greenspace contributes to the wellbeing of us all.

There will be two workshops, both of which will take place in the University of Edinburgh Informatics Forum on 19 February 2018. The workshops will present two projects — Nature-Smart Cities in London and Edinburgh CitySounds — which are using the Internet of Things and bioacoustic monitoring to learn about biodiversity and nature in the urban landscape.

The first workshop is directed toward an academic and professional audience who are interested in research and application around the Internet of Things and data science in relation to biodiversity, health & wellbeing, and nature & greenspace in the city. It will take place 2:00pm–4:00pm.

The second workshop is a non-technical event, intended for anyone with a general interest in the connections between technology, data and biodiversity in the city. A key part of this workshop will be an interactive session in which we will generate and collect ideas and feedback about specific issues that are of interest to participants. We will also look at how we might use the Internet of Things to learn and communicate better about biodiversity in the city. The workshop will take place 5:30pm–7:45pm.

An earlier post described my initial steps in building an audio monitoring device, and over the last couple of weeks, I have worked on putting the electronics inside an enclosure that is waterproof and will not be too obtrusive when installed in a tree. We refer to it as the “bird-box”. The box is made largely of 3mm plywood, with some thicker wood framing. It’s been stained and varnished to weatherproof it. The design enables easy separate access to change the battery without dislodging the Raspberry Pi Zero W processor and the Ultramic. On the inside, we use hermetically sealed plastic lunch boxes to hold the sensitive electronics, with sealed punch-throughs for the various connecting cables. It’s cheap and very effective.

Our next step was to carry out some field-testing of the device. We decided to do this in the private garden of a University of Edinburgh property, close enough to the Meadows to capture representative samples of sounds in the environment. I installed a temporary WiFi access point in the building to pick up the data from the prototype device in the garden, which is collected on a laptop also sited within the building.

The recording device placed outside for 72 hours in the garden

Here’s a small sample of what we recorded over the three days of wind, snow, rain and freezing temperatures. The unit performed well in these challenging conditions, including the 30,000 mAh power bank.

This audio sample is indicative of what kinds of things we can detect in the urban environment: an emergency siren in the background, a stonemason working on a nearby building, and a snatch of bird song. The spectrogram below illustrates the different frequency ranges at which the sounds occur, from 0kHz up to 20kHz.

The bottom pink line is ambient sound.

The faint wavy pink line above that is the siren.

The strong pink fence-like pattern above that is the sound of the stonemason tapping away.

Finally, the little pink burst (between 3kHz and 5kHz) just before the last two taps from the stonemason is the clearly-audible bird song.

Listen again whilst looking at the image and you can observe how the sounds interact with each other.

We are excited to see that the recording device, the WiFi router and the computer all seem to be working together well.

Pursuing our goal of collaborating with Edinburgh Living Landscapes and other partners to explore how soundscape data can support community engagement, education and citizen science and increase the value created by urban greenspace, we invited stakeholders and interested parties to an initial CitySounds Co-Design workshop on 9th January 2018.

It was a great event, full of ideas and enthusiasm. Here, we briefly mention the main topics of discussion.

Round table discussion

Exploring and understanding the data that will be captured

The six audio monitoring devices will each record 10-second samples in rotation, focusing on biodiversity in the Meadows. The devices will operate 24/7.

We are hoping that these will pick up birds, bats (which cannot be heard by the human ear), rain, traffic noise, etc. It will be interesting to see how many anthropogenic sounds occur in the ultrasonic range.

We should be able to detect bird sounds within a 50–100m range and bats within a 30m range. (Interesting fact: bats are loud! Their signals are typically over 100 decibels.)

We are in the process of installing a WiFi access point on the 6th floor of the University Main Library, facing the Meadows.

Data will be directly transferred via WiFi to a server—so no data will be kept on the devices themselves.

It was pointed out that it will be important to make it as easy as possible for small biodiversity organisations to access the collected audio data, since often these have little or no resources for dealing with technical intricacies.

Community engagement actions in the project: who are we targeting and what do we want to achieve?

We are planning to organise at least three community engagement events during the course of the project:

First data literacy workshop (open to stakeholders)

Second data literacy workshop (open to interested groups and the public)

A final sonic art exhibition open to the public.

We spent the last section of the workshop discussing various ideas for these events.

Whiteboard capture of ideas for engagement events

The two data literacy workshops

These workshops will be an opportunity to communicate with the public about acoustic data and to engage their interest in data, IoT and urban greenspaces. We discussed:

What are we trying to achieve in the workshops?

What issues should the workshops address?

How can these apply in general to biodiversity monitoring?

How can they apply to the green network across the city that Edinburgh Living Landscapes is creating?

What is the target audience for the workshops? People already involved in biodiversity activities?

Co-designing in action

Measuring impact of biodiversity initiatives in the city

How can Edinburgh Living Landscape, FOMBL, the CEC Biodiversity team, and other interested partners use acoustic data to create evidence and evaluate the impact of their work? We are hoping to continue the monitoring after March 2018 (i.e., beyond the period of funding from OrganiCity) — having 12 months of data or more would be valuable to us and to our partners.

FOMBL/Greening Our Street:

Can the monitoring help identify ‘green tunnels’ through the city? This would be really valuable information for shaping future biodiversity initiatives.

City of Edinburgh Council:

Because it is time-consuming and expensive to collect biodiversity data, much of the information about sites across the city is out of date. It would be very useful if IoT technology could be used to get much more timely biodiversity data. Amongst other things, this would give evidence to support continued protection of those greenspaces.

The Sonic Art Exhibition

Martin Parker explains plan for sonic art exhibition.

We revisited plans for the end-of-project exhibition and event and considered whether to adapt or expand it. This event is intended to be both a response to the audio assets collected by the project and simultaneously a way of engaging with the public. Martin Parker explained his original conception, where six speakers would each be controlled by a location-aware app on a phone, determining what, how and when sound comes out of the speaker. In addition, the speakers would be movable, and members of the audience could arrange and re-organise the soundscape within the physical exhibition space.

Ideas that we discussed included:

How can we build a biodiversity storytelling aspect to the sounds? Should we, for example, include information about bats as an accompaniment to the audio?

How will we represent ultrasonic sounds to the public?

Can we capture different times of day on speakers, so that people can hear sounds associated with the night, the morning, etc.?

Should we associate sounds from different parts of the Meadows with different parts of the room?

We are still working out the best processes and activities for our two data literacy workshops and the final sonic art exhibition, so watch out for further blog posts!