Interactive infographics (IIGs) are becoming an increasingly popular medium for presenting sophisticated research in an engaging way. IIGs help consumers make sense of large data sets by curating data, making it mutable, and bringing a degree of play to the process of parsing data. Univers Labs has a lot of experience building these; one of our most popular creations is an interactive infographic called Western Dance Music.

We are investigating ways to revitalise and refresh the IIG concept as a medium for delivering data and hooking viewers. IIGs frequently use technologies like Flash or HTML5 Canvas to produce high-performance, interactive 2D graphics, but for our latest infographic we wanted to push the envelope and deliver something really cutting-edge.

The concept design called for fluid, shifting shapes resembling a 3D area chart. To implement it, we chose WebGL via three.js, stepping out of our comfort zone in pursuit of an awesome design. The idea was to let viewers control the visualisation by traversing one axis of the chart, with the chart morphing and undulating in response to the user's control.

Creating a limited-colour shader

Part of the concept design's charm was its blocky patterning and discrete colour palette. To reproduce this effect, we had to write a shader that drew with a limited palette but still integrated well with the lighting system offered by three.js. We constructed a bare-minimum Lambert shader from the default shader chunks, and used the light intensity value to index into a palette texture. To create the blocky effect, we used the THREE.NearestFilter texture sampling mode instead of linear sampling, and also sampled a noise texture to offset each vertex's y-coordinate, giving a floating effect. Doing this per-vertex on the CPU would be fairly intensive.
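The idea behind the nearest-neighbour palette lookup can be sketched in plain JavaScript (illustrative only — the real work happens in GLSL on the GPU): the light intensity picks the single closest palette entry, with no blending between entries, which is what produces the flat colour bands.

```javascript
// Quantise a light intensity in [0, 1] to a discrete palette entry,
// mimicking what THREE.NearestFilter does when sampling a palette texture.
function paletteLookup(palette, intensity) {
  var clamped = Math.min(Math.max(intensity, 0), 1);
  var index = Math.min(Math.floor(clamped * palette.length), palette.length - 1);
  return palette[index]; // nearest entry, no interpolation
}
```

With linear sampling, intensities between two entries would blend their colours; snapping to the nearest entry instead is what keeps the shading stepped and graphic.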

A basic noise texture was generated using GIMP and mirrored to make it repeatable:

We also created several carefully designed colour palettes. Ideally, these could be procedurally generated to allow for an arbitrary number of graphs rather than a hardcoded quantity:

Making a fluid graph

To produce the flowing fluid effect, we modelled the y-coordinate of each vertex as a spring, using the equation F = -kx - bv, where k is the spring constant, x the displacement, b the damping coefficient, and v the velocity.
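A minimal sketch of that spring in JavaScript (function and variable names are ours, not the original code) applies the force each frame with semi-implicit Euler integration:

```javascript
// Per-vertex spring: the y-coordinate chases a target height under F = -kx - bv.
function createSpring(k, b) {
  return { k: k, b: b, value: 0, velocity: 0 };
}

function stepSpring(spring, target, dt) {
  var displacement = spring.value - target;                          // x
  var force = -spring.k * displacement - spring.b * spring.velocity; // F = -kx - bv
  spring.velocity += force * dt;                                     // integrate velocity first…
  spring.value += spring.velocity * dt;                              // …then position (semi-implicit Euler)
  return spring.value;
}
```

In the render loop, each vertex's y is advanced one step toward its data value, so moving along the chart's axis makes the surface ripple and then settle.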

Fake depth of field and transparency

At first, we used a depth-of-field (DOF) shader to try to highlight the selected graph, but this produced a lot of artefacts. It was also difficult to single out any specific graph due to the angle of the camera:

Instead, we used multi-pass rendering, drawing the blurred background, the highlighted graph, and the blurred foreground in that order. The result of each pass was drawn straight to the screen over the top of the last, rather than using three.js's EffectComposer to combine all the passes. This saves GPU cycles, and also allows Google Chrome's built-in anti-aliasing to continue working.

Can you tell the difference between 24-bit audio and an MP3? Are the audiophiles complete fools for wasting their money on high-end stereos when a standard bit of kit is just fine? Who cares anyway? Can all this be solved with Linux and a Raspberry Pi? Damn right it can!

The nerd within has been seeking a new obsession. Something to get thoroughly agoraphobic about. The origins of this new interest are a little murky, but most likely have their roots in the "Obsessed with Linux" years of my life, when I fussed over encoding my music in .ogg format and ranted to people about the perils of MP3. I thought it was terribly important.

Time moved on, and I recently realised that my audio setup at home was pretty ramshackle. My CD player was broken, files were scattered across different devices, and people kept ranting on about quality. Some said it did not matter; MP3 was more than adequate. Others stridently advocated lossless formats, high-quality studio masters and specialised DACs (digital-to-analogue converters) for playing straight from your computer or phone. Who was right, and what should be done?

This is not a new argument. It's like asking which is better, Karate or Kung Fu? There is no real right answer. The Guardian ran a very good article on this subject. One tester said that he did most of his listening while out walking, where perfect quality was impossible anyway.

Perhaps I should have stopped here and saved some money.

Obviously, spending thousands of pounds on new speakers, pre-amps, amps, things with valves and rare room-temperature superconducting cabling was not the right approach to take here. I wanted to have some fun. I wanted an achievable way of experimenting with, and learning about, the quality of different audio formats and hardware, with the aim of figuring out for myself which camp was right: the "good-quality MP3s are fine" crowd or the lossless audiophile brigade.

The approach was simple and involved assessing the chain, or pipeline, through which the audio would move from the file itself to my ears:

Gain access to examples of the highest quality (e.g. the 24-bit FLACs available from some music retailers).

Build or buy a basic set of hardware known to handle the audio data correctly, without degrading it in any way, on its journey from the CPU to my eardrums, allowing me to decide for myself whether I was missing out on anything or not.

Buying a new stereo was not an option here. I already have a perfectly good one; I got the CD player repaired and it's just fine. Furthermore, buying ones marketed to audiophiles seems to involve adding many, many zeros to the price. I needed a different approach. With this in mind, I bought myself a Raspberry Pi. At £25.00 for the core of the system, I felt I was off to a good start. Add £4.75 for a power supply and £7.37 for a WiFi adapter. Total so far: £37.12.

The DAC

I knew that the Pi could handle all of the various audio files right from the command line: FLACs, Oggs, MP3s, WAVs. The initial stage of this project required some research into how I could bypass the Pi's not-very-good built-in digital-to-analogue converter.

According to Wikipedia again: "A DAC converts an abstract finite-precision number (usually a fixed-point binary number) into a physical quantity (e.g., a voltage or a pressure)" and "Most modern audio signals are stored in digital form (for example MP3s and CDs) and in order to be heard through speakers they must be converted into an analog signal. DACs are therefore found in CD players, digital music players, and PC sound cards. Specialist standalone DACs can also be found in high-end hi-fi systems."

So, I had to source a DAC. It had to be:

Portable.

Good Quality.

Not too expensive.

Supported by the Pi.

Able to be powered by the Pi.

A stretch goal was being able to use it with my phone (a Samsung Galaxy S4).

What Hifi have a good review of some of the contenders which I investigated.

In the end I opted for a different product: the KunLun e18. A friend at work (in the audiophile camp) had the e17 and frequently sang its praises. The device met all my criteria. There were many nerdy charts and diagrams of its supposed outputs on the site, which I freely admit to not understanding. However, the key line I cared about was:

"USB Sample Rate Support 32/44.1/48/96KHz @ 16/24Bit."

This means it can play at the highest common bit depths (think of bit depth not as how loud or quiet it goes, but as the resolution with which changes in amplitude can be recorded and played back).

The e18 was a bargain at £118.00 ;-)

The device itself was promptly delivered, nicely packaged with a lot of cables (including some funky straps to strap it to your phone), little rubber mounts and a ton of other things. It worked with my phone and everything else. 10/10.

Setting Up the Pi

The next task was to configure the Pi itself. In short, I wanted the following:

A Debian-based Linux flavour.

Something I could operate remotely without having to physically connect to the Pi.

Ideally something needlessly odd, such as using SSH to control it, just so I can say that I did and that my stereo is odder than your stereo.

My Linux skills are basic. I can do some things but I am not especially advanced. With this in mind, I visited the Raspberry Pi site and found the downloads section. I recommend following their instructions for NOOBS if you are new to this. You download the image and write it to an SD card (the instructions for this worked just fine). When you boot, the installation system invites you to select one of several versions of Linux for the Pi. I chose the most common one, a Debian derivative called Raspbian. NOOBS downloaded it and set everything up. It was easy and I was up and running on Linux!

After configuring the WiFi device, I needed to install the following packages using apt-get.

MPD - The music player daemon. According to Wikipedia, it "plays audio files, organizes playlists and maintains a music database. In order to interact with it, a separate client is needed."
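As a rough sketch (the exact values depend on your DAC and card ordering — this is illustrative, not the config from this build), pointing MPD at an ALSA device in /etc/mpd.conf looks something like:

```
# /etc/mpd.conf — audio output section (illustrative values)
audio_output {
    type    "alsa"
    name    "USB DAC"
    device  "hw:1,0"    # card 1, device 0; list your cards with `aplay -l`
}
```

After editing, restart the daemon and point any MPD client at the Pi's address.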

For some further light reading: I am a latecomer to the game of using a Pi for audio, and I would recommend visiting Raspberry Fi. They offer a ready-compiled Linux version that supports everything you need for this type of project out of the box. In the end I chose to stick with the standard distro, though; I could always move onto this later.

The setup thus far was using the standard sound jack built into the Pi. To complete the chain, I needed to bypass this and send the digital audio out via the USB cable to be handled by the e18 DAC. I was dreading this part and suspected that it would fail to work. I was pleasantly surprised, though, and was able to find the right configuration options. The DAC now handled the audio and played it. Technically, we were finished.
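For anyone attempting the same thing, the ALSA side of this can be sketched as follows (an assumed card index, not necessarily the exact change used here). Making the USB DAC the system default means anything using ALSA, MPD included, goes out over USB rather than the onboard jack:

```
# /etc/asound.conf — make the USB DAC the default ALSA device
# (the onboard jack is usually card 0 and a USB DAC card 1;
#  confirm the index with `aplay -l`)
defaults.pcm.card 1
defaults.ctl.card 1
```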

The Important Part: Making it Look Bling

The whole setup looked pretty rubbish, though. The Pi just sat out by itself, and the tension from the cables caused it to move all over the place. I looked into the many third-party cases available. They seemed OK and would have done the job, but I wanted something a bit more interesting. I wanted a crazy mad stereo in miniature.

Whilst trying to clear out my grandparents' garage of its hoards of scientific equipment, the lathe, the masses of tools, the dead spider, the living spider, I came upon one of many boxes stuffed with scrap metal, in this case aluminium. The idea hit me: let's mount the Pi on an oversized block of metal. That would look just awesome.

Finding a suitable lump of metal.

Much head-scratching later (I have no experience with this kind of work), I had managed to drill four holes and tap three of them. (There were four holes on the other side too, one with a broken-off tap sticking out of it, but we won't speak any further about that.)

I mounted the Pi and gazed at my creation. It was OK, but frankly looked pretty scruffy. I wanted to smarten the block up and maybe make a recess for the DAC. A quick foray to an engineering firm located next door to the office, Oxford Precision Components, revealed some very bemused but very helpful engineers. I was quoted £110.00 to mill a recess into the block for the DAC, polish everything, chamfer the edges and get the snapped-off tap out from the underside. For an extra £30.00 they could anodise it as well. This was pushing the budget up somewhat, but the momentum was too great to stop now, so I handed over the block and some grubby twenty-pound notes, and held my breath.

At the engineers!

The work was superb. The anodising will keep the shine on the block, and its visual impact is really stunning. Silly? Absolutely, but great fun.

Mounting the Pi

Back Home!

A trip to Maplin got some rubber feet for the base of the block for £4.95 and some rubber washers for the Pi for about £3.95. Certainly this has all added up somewhat, but no complaints so far.

Can you Even Tell?

Having reached this point, I should say that I do not really care too much about the original objectives. This is not supposed to sound flippant, but the purpose of all of this was to do a fun technical project from which I could gain the following:

Technical knowledge from people at work.

Get my first Raspberry Pi and do something practical with it.

Experiment with audio.

So far there has been little actual listening. Was the audio quality any good?

I could start posting detailed assessments of my experiences with the 24-bit studio master of Peter Gabriel's album So. I could have purchased the same album on CD and bought the MP3 download. I could then have tried encoding it in iTunes and as Ogg Vorbis, and listened to Don't Give Up at least 700 times. To be honest, though, I have better things to do. I can say that the quality is really good and I like my new stereo. I like being able to play any file format (good luck with that on iTunes) on a very simple device. I can SSH (with Juice SSH) into the Pi from my phone (there is an app called MPD Droid that is easier to use, but it would not be the same).

There is a site called mp3ornot that plays you audio files and quizzes you to see if you can tell the difference in their bit rates. I got most of them right, but was not perfect.

A colleague also waded into the fray and wrote a small program which quizzed the listener on bit depths, playing various versions of the same file at different bit depths. As with the bit-rate test, I got some of these correct, but was by no means perfect.

In conclusion: you can definitely use a Raspberry Pi as a stereo. It's not the simplest setup, and you have to think, "Right, I will now go and play music." This was good for cultivating a disciplined mindset. I suspect, though, that I will end up peering in the window of the stereo store in the near future. Perhaps my setup was simply too cheap, and if I sat down in front of a stereo costing thousands of pounds I would realise the error of my ways. Personally, I am thinking 320kbps MP3s and good-quality headphones are just fine.

Many many awesome tools (with cool names and logos).

The next tool we tried was Trello, and things were very good for a long time. Trello allowed blissful real-time communication with everyone in the team, with displays across machines updating immediately. It uses a Kanban system of cards in columns, which move from the left-hand side of the screen to the right as work progresses. We used Trello for all of our projects, including design processes, development, communications with clients and software testing. In addition, we had Trello boards for project overviews.

Trello has a very generous pricing model, and is very friendly to use. It is hard to find fault with Trello considering its simplicity, its ease of use, and how well it can be applied to different facets of project management and development work. However, despite this, Trello is not the perfect tool for everything and problems did begin to emerge.

The key issue manifested itself when we began to use Trello to manage a much larger software development project, which had several subsequent versions. This required duplicate Trello boards for what was essentially a single project. As anyone who deals with data knows, duplication is a wicked, wicked thing. We did cunning things with the links between Trello boards, but the strain began to show. The main issue here was bug reporting. Trello is not a bug tracker; trying to impose feedback rounds and specific states for issues is risky, as the boards allow people to break the layout fairly easily. We realised that we needed a dedicated bug tracker, among other things.

The same issues also started to appear in our use of Gantt charts. Very large projects often required multiple versions of a chart. Henry Gantt would not have approved, and we were well aware of the need for changes to the pipeline we used to run projects.

A colleague commented jokingly a few days ago that we have a lot of different pieces of software all working under the guise of communication and general project tools. I was forced to agree, but pointed out that there seems to be no single tool that does everything. If there is, its makers usually impose terrifying pricing models, or have broken the tool into two or more products that also have terrifying pricing models and charge still more to integrate with each other. A collection of separate systems is at least a set of known evils.

Many many requirements

Trying to identify the many requirements of the ideal project management software solution yields the following list (and some known popular products):

Communications:

Email sucks. Trying to use email to resolve design issues, converse and have a meaningful dialogue with lots of people is not a nice experience. It works, after a fashion, but it is a joyless process. Something along the lines of Basecamp would be nicer.

Timelines:

As stated, a nice Gantt tool. Currently we are using Smartsheet. It's alright; I'm not overwhelmed by it, but it works fine. We reviewed Liquid Planner, were blinded by the light of its power, and fled back to the darkness. The image below is on a par with the complexity of Liquid Planner. That thing will make you a black hole if you are not careful.

Task Allocation:

Trello is not bad at this, but you quickly begin to allocate multiple columns for the various types of task that appear (design, development, testing, general wrangling). The chaos is always seeking a way in. As discussed above, something a little stricter would be good.

Bug Tracking and Testing:

Bug tracking means code, and code means repositories and version control, which represents yet another system. Something geared towards letting developers share code, and also log issues so that fixes can be made, would be ideal. In particular, a solution which can manage feedback rounds, and track how many times a bug has been addressed, would be good.

With a creeping sense of dread, and a very bad meme, it became apparent that what we were looking for -- an all-seeing, all-doing piece of software, which was additionally free and open source -- existed only in myth.

The Quest

With all of the requirements above in mind, I started looking around, thinking that there had to be something which fit. This Wiki page, whilst terrifying, was comprehensive and proved very useful.

Trying to introduce a new system to an established way of working is not a simple decision, especially something as comprehensive as this one. The stakeholders in the business were supportive and the team appreciated that the current software was causing problems. However, asking everyone to demo a huge range of alternatives was unlikely to prove popular and unlikely to be supported. Whatever system was chosen needed to be selected very carefully.

Several systems were tried. Many 30-day trials were started, and lots of people from sales teams kept calling the office asking to speak with me, generally making moderately hard sells.

No one was happy though, and the group consensus was that that it would be very bad to start using a new tool, only to drop it after a few weeks due to a complete lack of interest and uptake.

Phabricator

Phabricator was originally a tool developed by Facebook and was subsequently spun out under a new company named Phacility. The fact that this was both a free, open source software product that we could demo ourselves (with no time limit), and one which is already in use with several notable organisations, pushed Phabricator to the front of the queue.

The range of tools which Phabricator offers is huge, and frankly overwhelming. With that said, it offered pretty much everything we wanted, with the exception of a Gannt Tool. This was something we were prepared to be flexible over, as the remaining features were good.

First and foremost, Phabricator allows a rigid control of tasks and their allocation. Its styling will be familiar to anyone familiar with bug trackers like Bugzilla, and our developers took to it quickly. The backend is MySQL (which upset our Postgres aficionados) and you need to get grimy on the command line to set things up. The people at Phabricator definitely have a sense of humour, and this shows frequently in their documentation.

The UI for Phabricator is comprehensive, but to UX obsessives, it will seem a little older in style than some of the newer interfaces, like that offered by Trello. This should not detract from judging Phabricator. A study of the system and a read of the change logs shows that the development team behind it are very clearly focussed on the core feature support and stability first, with UI coming second. This is not to say that this is not appreciated, because it is, and over the course of several updates we have noticed numerous insightful enhancements to the front-end. Phacility's key objective, though, is to ensure that their system functions well, and is stable.

We were able to link Phabricator to our various Git repositories (we like BitBucket), setup unlimited specific projects, each of which supported its own Wiki (great for project-specific documentation) and use Phabricator to store design files. A great feature we found was that Phabricator also offer Workboards similar in style to Trello (not as powereful or as refined, but usable and very helpful):-

A full review of Phabricator is a topic for another post. The focus here was on the journey we undertook to find and settle on Phabricator as our project management tool of choice. To conclude, however, I can say that we are pleased with the choice we made. Some of the systems Phabricator offers do need further development: a good example would be the internal messaging system. As a company, we have embraced the awesome Slack, which is so good that no one cares that it is a seperate system. We still use Trello as well for a total overview of all projects and it works very nicely this way.

We still need better support for external players, such as allowing specific clients access to specific projects, such as is offered very well by Basecamp. The support for this still seems to need work. The impression we have, though, is that Phabricator will continue to enhance, and that these areas are going to get better in time.

Following our deployment of Phabricator, we learned that Wikipedia had also been engaged in the process to select a new tool which lasted over six months.

Wikipedia describe their choice of Phabricator as allowing them to have

"an opportunity to centralize our various project and product management tools into a single platform, making it easier to follow technical discussions you’re interested in."

The full story on this can be viewed here and is well worth looking at.

Is there a single software solution for project management that does everything a project manager needs? Is there one ring to rule them all? One does not simply come across a single piece of software that just does *everything* in terms of communication, task management, timelines, Gantt charts, writing silly notes to yourself and others, bug reporting, and generally running projects.

Does such a solution exist? Or do we have to make one?

This account is written by someone new to project management, who has been learning on the job from the outset. It’s been pretty complicated coming into a world that seems fractured between the traditional way of doing things and the new, popular agile methodologies. As someone standing at that crossroads with no formal background in the role, this has proved a little daunting, but also very interesting, with many new things to learn.

The situation a new startup finds itself in.

As a startup with modest origins but an ambitious vision, the need to manage new business, projects and communications effectively was respected from the outset.

We started out using Stickies in OS X; this worked well enough for a while, but eventually chaos broke out. The stickies were local to one machine, each sticky was unrelated to the others, and the list of problems went on and on. A new system was needed, so we went searching for a dedicated project management tool. We had only a brief dalliance with our first choice, Pivotal Tracker: a powerful and complex tool, but one that ultimately proved fruitless; we needed something less development-oriented, for the sake of our managers.

Many, many awesome tools (with cool names and logos).

The next tool we tried was Trello, and things were very good for a long time. Trello allowed blissful real-time communication with everyone in the team, with displays across machines updating immediately. It uses a Kanban-style board of cards in columns, with cards moving from the left-hand side of the screen to the right as work progresses. We used Trello for all of our projects, including design processes, development, communications with clients and software testing. In addition, we had Trello boards for project overviews.

Trello has a very generous pricing model, and is very friendly to use. It is hard to find fault with Trello considering its simplicity, its ease of use, and how well it can be applied to different facets of project management and development work. However, despite this, Trello is not the perfect tool for everything and problems did begin to emerge.

The key issue manifested itself when we began to use Trello to manage a much larger software development project, which had several subsequent versions. This required duplicate Trello boards for what was essentially a single project. As anyone who deals with data knows, duplication is a wicked, wicked thing. We did cunning things with links between Trello boards, but the strain began to show. The main issue was bug reporting. Trello is not a bug tracker. Trying to impose feedback rounds and specific states for issues is risky in Trello, as the boards allow people to break the layout fairly easily. We realised that we needed a dedicated bug tracker, among other things.

The same issues also started to appear in our use of Gantt charts. Very large projects often required multiple versions of a chart. Henry Gantt would not have approved, and we were well aware of the need for changes to the pipeline we used to run projects.

A colleague joked a few days ago that we have a lot of different pieces of software all working under the guise of communication and general project tools. I was forced to agree, but pointed out that there seems to be no single tool that does everything. If there is, its makers usually impose terrifying pricing models, or have broken the tool into two or more products that also have terrifying pricing models and charge more still to integrate with each other. The issue of lots of separate systems is at least a set of known evils.

Many, many requirements

Trying to identify the many requirements of the ideal project management software solution yields the following list (and some known popular products):

Communications:

Email sucks. Trying to use email to resolve design issues, converse and have a meaningful dialogue with lots of people is not a nice experience. It works, after a fashion, but it is a joyless process. Something along the lines of Basecamp would be nicer.

Timelines:

As stated, a nice Gantt tool. Currently we are using Smartsheet. It’s alright; I’m not overwhelmed by it, but it works fine. We reviewed LiquidPlanner and were blinded by the light of its power and fled back to the darkness. The image below is on a par with the complexity of LiquidPlanner. That thing will make you a black hole if you are not careful.

Task Allocation:

Trello is not bad at this, but you quickly begin to allocate multiple columns for the various types of tasks that appear (design, development, testing, general wrangling). The chaos is always seeking a way in. As discussed above, something a little more strict would be good.

Bug Tracking and Testing:

Bug tracking means coding, and coding means repositories and version control, which represent another system. Something geared towards letting developers share code, and also allowing issues to be logged so that fixes can be made, would be ideal. In particular, a solution which can manage feedback rounds, and track how many times a bug has been addressed, would be good.

With a creeping sense of dread, and a very bad meme, it became apparent that what we were looking for -- an all-seeing, all-doing piece of software, which was additionally free and open source -- existed only in myth.

The Quest

With all of the requirements above in mind, I started looking around, thinking that there had to be something which fit. This Wiki page, whilst terrifying, was comprehensive and proved very useful.

Trying to introduce a new system to an established way of working is not a simple decision, especially a system as comprehensive as this. The stakeholders in the business were supportive, and the team appreciated that the current software was causing problems. However, asking everyone to demo a huge range of alternatives was unlikely to prove popular or to be supported. Whatever system was chosen needed to be selected very carefully.

Several systems were tried. Many 30-day trials were started, and lots of people from sales teams kept calling the office asking to speak with me, generally making moderately hard sells.

No one was happy, though, and the group consensus was that it would be very bad to start using a new tool, only to drop it after a few weeks due to a complete lack of interest and uptake.

Phabricator

Phabricator was originally a tool developed by Facebook, and was subsequently spun out under a new company named Phacility. The fact that this was a free, open source product that we could demo ourselves (with no time limit), and one already in use at several notable organisations, pushed Phabricator to the front of the queue.

The range of tools which Phabricator offers is huge, and frankly overwhelming. With that said, it offered pretty much everything we wanted, with the exception of a Gantt tool. This was something we were prepared to be flexible about, as the remaining features were good.

First and foremost, Phabricator allows rigid control of tasks and their allocation. Its styling will be instantly recognisable to anyone familiar with bug trackers like Bugzilla, and our developers took to it quickly. The backend is MySQL (which upset our Postgres aficionados), and you need to get grimy on the command line to set things up. The people behind Phabricator definitely have a sense of humour, and this shows frequently in their documentation.

The UI for Phabricator is comprehensive, but to UX obsessives it will seem a little older in style than some newer interfaces, like that offered by Trello. This should not count against Phabricator. A study of the system and a read of the change logs show that the development team are clearly focussed on core features and stability first, with UI coming second. The UI work is still appreciated: over the course of several updates we have noticed numerous insightful enhancements to the front-end. Phacility's key objective, though, is to ensure that the system functions well and is stable.

We were able to link Phabricator to our various Git repositories (we like Bitbucket), set up unlimited projects, each with its own wiki (great for project-specific documentation), and use Phabricator to store design files. A great feature we found was that Phabricator also offers Workboards, similar in style to Trello's boards (not as powerful or as refined, but usable and very helpful):

A full review of Phabricator is a topic for another post. The focus here was on the journey we undertook to find and settle on Phabricator as our project management tool of choice. To conclude, however, I can say that we are pleased with the choice we made. Some of the systems Phabricator offers do need further development: a good example would be the internal messaging system. As a company, we have embraced the awesome Slack, which is so good that no one cares that it is a separate system. We still use Trello for a total overview of all projects, and it works very nicely this way.

We still need better support for external players, such as allowing specific clients access to specific projects, something Basecamp does very well. The support for this still seems to need work. The impression we have, though, is that Phabricator will continue to improve, and that these areas will get better in time.

Following our deployment of Phabricator, we learned that Wikipedia had also been engaged in a process to select a new tool, one which lasted over six months.

Wikipedia describe their choice of Phabricator as allowing them to have

"an opportunity to centralize our various project and product management tools into a single platform, making it easier to follow technical discussions you’re interested in."

The full story on this can be viewed here and is well worth looking at.

Remember those modem things that made screechy noises like mutant mice? Shockingly, some people will not. Getting over this outrage: before the advent of mobile data and broadband, these were the devices that let you play Command and Conquer over the phone line with your friend, download an MP3, or maybe several if you left it on all night…

Recently, our technical director ordered himself a new Baofeng transceiver. After finding fun things to listen to, and bizarre transmissions of people counting out strange numbers, we started hearing odd noises coming from his ‘ordered’ and ‘efficient’ ‘work area’.

It turns out that his latest invention is a primitive form of modem. It used two audio tones, denoting 0 and 1, to represent a message (plain text initially). The text was encoded as bytes (modulated) and sent warbling out of an Apple iMac’s audio socket, through the air, to be detected by a microphone receiver and decoded (demodulated). Slowly, the famous words did appear.

o World! Hello World! E

At a stunning speed of approximately 8 bits per second.

People in the office were told to keep quiet, and the air-con was switched off, as this thing called ‘noise’ upset the signal and made erroneous characters appear in place of the intended letters.

What is a Modem?

Everybody knows the answer to this: the box thing that lets you access Facebook and use BitTorrent (for legitimate purposes). It used to make odd noises and take its time about things; these days, it is quiet and very fast. It might have Virgin written on the side of it, but how does it actually work? ‘Modem’ is an abbreviation of modulator-demodulator. These devices encode digital information into an analog carrier signal (in the initial example here, that signal was a sound wave moving across the room). The receiving modem demodulates the analog signal and converts it back into digital information.
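The two-tone scheme described above is a form of frequency-shift keying. As a rough illustration of the principle (not the lab's actual JavaScript, and with made-up sample rate, baud rate, and tone frequencies), a minimal modulator and demodulator might look like this, with the receiver using the Goertzel algorithm to measure the energy at each of the two tones:

```python
import math

SAMPLE_RATE = 8000        # samples per second (an assumed value)
BAUD = 8                  # bits per second, matching the ~8 bps above
F0, F1 = 1000.0, 2000.0   # tone frequencies for bit 0 and bit 1 (arbitrary)

def modulate(bits):
    """Turn a bit sequence into raw audio samples, one tone burst per bit."""
    samples_per_bit = SAMPLE_RATE // BAUD
    out = []
    for bit in bits:
        freq = F1 if bit else F0
        for n in range(samples_per_bit):
            out.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return out

def goertzel_power(samples, freq):
    """Energy of `samples` at `freq` (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def demodulate(samples):
    """Recover bits by comparing the energy at the two tone frequencies."""
    samples_per_bit = SAMPLE_RATE // BAUD
    bits = []
    for i in range(0, len(samples), samples_per_bit):
        chunk = samples[i:i + samples_per_bit]
        bits.append(1 if goertzel_power(chunk, F1) > goertzel_power(chunk, F0) else 0)
    return bits

message = [0, 1, 1, 0, 1, 0, 0, 1]       # one byte of payload
print(demodulate(modulate(message)))     # → [0, 1, 1, 0, 1, 0, 0, 1]
```

In a real over-the-air link the received samples would come from the microphone, complete with the office noise mentioned above, rather than straight from the modulator.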

The history of the modem is a large topic. What has made this research project interesting is how quickly Nat (the Technical Director) has been enhancing the original idea and introducing new features which follow a similar trajectory to the evolutionary path of original modems.

The next step was to change the analog channel from standard audio to a radio frequency. This was where the Baofeng came into play. The digital data was converted using JavaScript, and the resulting audio was sent out of the iMac’s headphone jack and into the Baofeng, which transmitted it. Another Baofeng and a MacBook Pro at the other end reversed the process, and the message was displayed on screen as it was received.

Go faster dammit!

The initial system was not particularly quick, managing a blistering speed of about 8 bits per second. The data was sent in a linear sequence, one piece of data following another in a neat, orderly queue. The next step was to research increasing this rate, and the next iteration is considerably more complex.

The image here shows 8 dedicated frequencies, each of which can hold a byte of data. In the new transmission scheme, a clock is used to send a time stamp. The receiver continually checks this stamp; if a new, incremented stamp appears, it knows that new data has arrived. This allows the transmitter to control the rate at which data is sent. The audio for this ranges from creepy-fairy-tale weird to something that Aphex Twin or Ryoji Ikeda would come up with. There is even a basic error-checking algorithm in place to keep the time stamp correct.

Could this be THE-FUTURE?

An idea being bounced around the lab is to buy a transmission license and host our website via radio transmission -- something along the lines of a Videotex system like Prestel or Minitel, but over radio. Practical? Perhaps not. Amazing? Most definitely.

Whenever you read an article on, say, the BBC’s website, the website will present you with a list of other articles which are related to the one you’re reading.

I never used to pay it any mind -- I just used the list for further reading -- but a new project recently made me reconsider how these relations are actually drawn up. The project requires a mechanism for drawing up lists of items related to a given news article, based on the metadata attached to each item -- not just articles related to the given article. As it turns out, there are lots of ways to draw up a set of many-to-one correlations like a ‘related articles’ list, and which one you use depends on the requirements of the task at hand.

Comparing text with natural language processing

Very broadly speaking, natural language processing is the discipline of creating computer programs which aim to work with human languages in a human-like manner. It is a field of intense research today, and helps drive the practice of machine learning: the creation and refinement of computer programs which can improve their own mechanisms. With NLP, it is possible to create programs which can ‘classify’ a body of text, placing it in a category based on how it compares to other texts of known classification. For example, classification is commonly used in spam filtering: a spam filter can classify a given email as either ‘spam’ or ‘not spam’ based on how its text compares to a corpus of known spam -- generally built up and updated by email users marking messages as spam.

Text classification is not limited to just two categories, though: it can be used with many categories, in order to produce a more broadly capable classification program. I ended up focussing my research for this project on text classification, as it sounded like something I could apply to many projects in the future.

Text classification algorithms

Text classification, as a problem, is fairly straightforward; implementing a technique to solve the classification problem is where things get complicated. There are many techniques for classifying text, but arguably, the most straightforward is naive Bayes classification. In essence, a naive Bayes classifier counts and compares the frequency of words, and suggests probabilities that a given text is of any particular category based on how word frequencies in the text compare to word frequencies in other texts of known classification. Because it is quite a simple algorithm, there are many implementations of naive Bayes classification out on the Web, for many different languages. The one I used, Natural, is an NLP library written in JavaScript for use in Node.js apps.
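Natural wraps all of this up behind its classifier object, but the counting-and-comparing mechanics can be sketched from scratch. This is a toy illustration of multinomial naive Bayes with add-one smoothing, not Natural's implementation, and the training texts are invented:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial naive Bayes text classifier."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.doc_counts = Counter()              # category -> number of documents
        self.vocab = set()

    def train(self, text, category):
        words = text.lower().split()
        self.word_counts[category].update(words)
        self.doc_counts[category] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.doc_counts.values())
        best, best_score = None, float('-inf')
        for cat, counts in self.word_counts.items():
            # Log prior plus summed log likelihoods, with add-one smoothing
            # so unseen words don't zero out the whole probability.
            score = math.log(self.doc_counts[cat] / total_docs)
            total_words = sum(counts.values())
            for w in words:
                score += math.log((counts[w] + 1) / (total_words + len(self.vocab)))
            if score > best_score:
                best, best_score = cat, score
        return best

nb = NaiveBayes()
nb.train('cheap pills buy now', 'spam')
nb.train('win money now click', 'spam')
nb.train('meeting agenda for tomorrow', 'ham')
nb.train('project deadline next week', 'ham')
print(nb.classify('buy cheap pills'))   # → spam
```

The 'naive' part is the assumption that every word is independent of every other; it is what makes the sums above so cheap, and also what leads to the flattening-out behaviour described below as the corpus grows.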

Using Natural

Simply passing a bunch of files into a naive Bayes classifier and expecting magic to happen is, as I discovered, really naive. The classifier is called ‘naive’ for a reason: on its own, it can’t distinguish between words, give words different weightings, or make any sort of ‘intelligent’ guesswork. Given sufficiently small data sets -- on the order of three or four categories, and a corpus of a few hundred words -- a naive Bayes classifier can work quite well. Once the data set grows beyond these thresholds, however, probabilities begin to level off rapidly, and the classifier becomes unable to return useful results. As the size of the corpus increases, common words become more common, and enter into multiple categories, meaning that the classifier struggles to determine anything useful other than the fact that a given document relates, to some degree, to just about every category the classifier knows about.

Natural and friends

Pre-processing, I learnt, is vital to making a naive Bayes classifier effective. Doing things like breaking words down into common lemmas, combining categories into broader categories based on common metadata, and picking out known keywords to give them an exceptional weighting, helped make the classifier much more useful. Wordpos, Stopwords, and a little regex helped me break texts down into useful lists of nouns in their citation form (the form of a word before inflection).
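A sketch of that kind of pipeline, using plain regex and a hand-rolled stopword list in place of the Wordpos and Stopwords packages, with crude suffix-stripping standing in for proper lemmatisation; the `keyword_weight` parameter is a hypothetical way of boosting known keywords by repeating them:

```python
import re

# A tiny stopword list for illustration; real lists (like the one in the
# Stopwords package) run to hundreds of entries.
STOPWORDS = {'the', 'a', 'an', 'is', 'are', 'of', 'to', 'and', 'in', 'for'}

# Crude suffix stripping standing in for real lemmatisation.
SUFFIXES = ('ing', 'ed', 'es', 's')

def normalise(word):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def preprocess(text, keywords=(), keyword_weight=3):
    """Tokenise, drop stopwords, normalise, and over-weight known keywords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    out = []
    for t in tokens:
        if t in STOPWORDS:
            continue
        t = normalise(t)
        # Repeating a keyword inflates its frequency, and hence its weight,
        # in a frequency-based classifier.
        out.extend([t] * (keyword_weight if t in keywords else 1))
    return out

print(preprocess('The cats are chasing mice', keywords={'cat'}))
# → ['cat', 'cat', 'cat', 'chas', 'mice']
```

Feeding the classifier these filtered token lists, rather than raw text, is what stops the common glue words from dragging every document towards every category.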

However, there is still much to do, and many other classification techniques to investigate. Additionally, there is much research to do into building a multi-threaded text classifier, which is another story altogether.

We’ve been on an investigative spree lately, and one of the products of our research is a little experiment in signal processing on Android. This Android mobile app runs in the background and responds to the sound of a whistle, so that, should you lose your phone, you can simply whistle, and have the phone make a noise, to help you find it. Detecting a whistle is a fairly involved task, and developing an app to do so yielded some of the following insights.

Sampling

To start working with the sound, we used a discrete Fourier transform, via the jTransforms library, to convert the sampled signal from the time domain into the frequency domain.
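The conversion into the frequency domain can be sketched with a naive DFT. The app itself used jTransforms' optimised FFT; this O(n²) plain-Java version exists purely to show what the transform produces, namely a magnitude for each frequency bucket.

```java
// Naive discrete Fourier transform, for illustration only -- the app used
// jTransforms' much faster FFT. Returns the magnitude of each frequency bucket.
public class Dft {
    public static double[] magnitudes(double[] samples) {
        int n = samples.length;
        double[] mags = new double[n / 2]; // buckets up to the Nyquist frequency
        for (int k = 0; k < n / 2; k++) {
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            mags[k] = Math.sqrt(re * re + im * im);
        }
        return mags;
    }

    public static void main(String[] args) {
        // A pure tone completing 4 cycles in a 32-sample window peaks in bucket 4.
        int n = 32;
        double[] tone = new double[n];
        for (int t = 0; t < n; t++) tone[t] = Math.sin(2 * Math.PI * 4 * t / n);
        double[] mags = magnitudes(tone);
        int peak = 0;
        for (int k = 1; k < mags.length; k++) if (mags[k] > mags[peak]) peak = k;
        System.out.println("peak bucket: " + peak); // prints "peak bucket: 4"
    }
}
```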

Graphing

We needed to look into the data produced by this sampling mechanism, so we could create an algorithm to detect the whistles. For this, we used GraphView.

This was the output from ambient room noise. It fit what we expected: a large DC component, with the higher frequencies gradually tailing off. The low end was useless for our purposes, so we cut out everything below 500 Hz. To find the bucket index for a given frequency:

int bucketIndex = (int) (frequency / ((float) mSampleRate / mFFTSize));
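As a worked example of that formula, here it is wrapped in a small helper. The 44100 Hz sample rate and 4096-point FFT are assumed values for illustration, not necessarily what the app used.

```java
// Worked example of the bucket-index formula. Sample rate and FFT size here
// are assumptions (44100 Hz, 4096 buckets), not the app's actual settings.
public class BucketIndex {
    public static int bucketFor(float frequency, int sampleRate, int fftSize) {
        return (int) (frequency / ((float) sampleRate / fftSize));
    }

    public static void main(String[] args) {
        // 44100 / 4096 is roughly 10.77 Hz per bucket, so 500 Hz lands in bucket 46.
        System.out.println(bucketFor(500f, 44100, 4096)); // prints 46
    }
}
```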

This was the output from a whistle: an obvious narrow spike, many times louder than the other frequencies. Below was the output from general noise:

Analysis

To detect a whistle programmatically, the plan was to find a narrow spike, somewhere above the 500 Hz cut-off, that was a number of times louder than the amplitude at any other frequency. This approach mostly worked, but could be triggered by other sounds in that range. To make the test more specific, we needed to check that the tip of the spike was very narrow. This was done by fitting a triangle of two lines around the spike: if any buckets crossed these lines, there was a good chance that the spike was just noise rather than a whistle. Below is a test fitting of the parameters; the main frequency test is the blue box, and the triangle test is in yellow:
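A minimal sketch of those two tests, the loudness test (the blue box in the figure) and the triangle narrowness test (the yellow lines), might look like this. The thresholds and the shape of the triangle are illustrative guesses, not the app's tuned values.

```java
// Sketch of the spike tests described above: a loudness test plus a
// triangle-fit narrowness test. Thresholds here are illustrative only.
public class SpikeDetector {
    /** True if the loudest bucket at/after minBucket looks like a narrow whistle spike. */
    public static boolean isWhistleSpike(double[] mags, int minBucket,
                                         double loudnessRatio, int halfWidth) {
        int peak = minBucket;
        for (int k = minBucket; k < mags.length; k++) if (mags[k] > mags[peak]) peak = k;

        // Loudness test: the peak must dwarf every bucket outside its neighbourhood.
        for (int k = minBucket; k < mags.length; k++) {
            if (Math.abs(k - peak) <= halfWidth) continue;
            if (mags[peak] < loudnessRatio * mags[k]) return false;
        }

        // Triangle test: inside the neighbourhood, each bucket must stay under a
        // straight line sloping down from the tip of the spike.
        for (int d = 1; d <= halfWidth; d++) {
            double ceiling = mags[peak] * (1.0 - d / (halfWidth + 1.0));
            if (peak - d >= minBucket && mags[peak - d] > ceiling) return false;
            if (peak + d < mags.length && mags[peak + d] > ceiling) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        double[] narrow = {0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 2, 10, 2,
                           0.5, 0.5, 0.5, 0.5, 0.5, 0.5};
        System.out.println(isWhistleSpike(narrow, 2, 5.0, 3)); // prints true
    }
}
```

A broad bump with the same peak height fails the triangle test, which is exactly what separates a whistle from a loud but noisy sound in the same band.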

To make the test even more specific and preclude more false positives, we counted the number of zero crossings. Very noisy sounds can produce narrow spikes, but they also produce many more zero crossings than a pure tone. We looked for 500 to 2800 crossings per second to confirm the whistle:
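The zero-crossing check can be sketched as below. The 500-2800 range comes from the article; the 44100 Hz sample rate in the example is an assumption.

```java
// Zero-crossing count sketch: a clean tone crosses zero at a steady rate,
// while broadband noise crosses far more often.
public class ZeroCrossings {
    /** Crossings counted in this window, scaled up to a per-second rate. */
    public static int perSecond(short[] samples, int sampleRate) {
        int crossings = 0;
        for (int i = 1; i < samples.length; i++) {
            if ((samples[i - 1] < 0) != (samples[i] < 0)) crossings++;
        }
        return (int) ((long) crossings * sampleRate / samples.length);
    }

    public static boolean looksLikeWhistle(short[] samples, int sampleRate) {
        int rate = perSecond(samples, sampleRate);
        return rate >= 500 && rate <= 2800;
    }

    public static void main(String[] args) {
        short[] tone = new short[4410]; // 0.1 s of a 1 kHz tone at 44100 Hz
        for (int t = 0; t < tone.length; t++)
            tone[t] = (short) (1000 * Math.sin(2 * Math.PI * 1000 * t / 44100.0));
        System.out.println(perSecond(tone, 44100)); // roughly 2000 crossings per second
    }
}
```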

Conclusion

Thanks to the techniques above, detecting whistles whilst the phone is on the desk can be done with almost perfect accuracy. However, if the phone is hidden away, the microphone is covered up, or the phone is enclosed in a case, the detection rate drops sharply. This is to be expected: after all, trying to listen to a conversation whilst stuffed in a bag with your fingers in your ears is equally difficult. Likewise, the false positive trigger rate increases substantially if music is playing, or a microwave is microwaving. Our next attempt at noise detection might rely on neural networks, or some other technique to make the process ‘smarter’.