Category: Biometrics

A Year’s Work Condensed into One Hour

Last week, I presented a webinar through the continuing education website AudiologyOnline to a number of audiologists around the country. The same week one year earlier, I had launched this blog. So, for me, the webinar was a culmination of the past year’s blog posts, tweets and videos, distilled into a one-hour presentation. Consolidating so many things I have learned into a single hour forced me to choose the topics I thought were most pertinent to hearing healthcare professionals.

Some of My Takeaways

Why This Time is Different

The most rewarding and fulfilling part of this process has been seeing the way things have unfolded: the technological progress made in both the hardware and software of in-the-ear devices, and the rate at which the emerging use cases for those devices are maturing. During the first portion of my presentation, I laid out why I feel this time is different from previous eras, when disruption felt as if it were on the doorstep yet never came to pass. The difference is largely due to how much the underlying technology has matured of late.

I would argue that the single biggest reason why this time is different is the smartphone supply chain, or as I stated in my talk, “The Peace Dividends of the Smartphone War” (props to Chris Anderson for describing this phenomenon so eloquently). Through the massive, unending proliferation of smartphones around the world, the components that comprise the smartphone (which also comprise pretty much all consumer technology) have gotten incredibly cheap and accessible.

Due to these economies of scale, there is a ton of innovation occurring with each component (sensors, processors, batteries, computer chips, microphones, etc.). This means more companies than ever, from various segments, are competing to set themselves apart in any way they can in their respective industries, and are therefore providing innovative breakthroughs for the rest of the industry to benefit from. So, hearing aids and hearables are benefiting from breakthroughs occurring in smart speakers and drones, because much of the innovation can be reaped and applied across the whole consumer technology space rather than being limited to one particular industry.

Learning from Apple

Another point that I really tried to hammer home is that our “connected” in-the-ear devices are now considered “exotropic,” meaning that they appreciate in value over time. Connectivity allows a device to enhance itself, through software/firmware updates and app integration, even after the point of sale, much like a smartphone. So, in the same way our hearing aids and hearables reap innovation occurring elsewhere in consumer technology, connectivity does something similar: it enables network effects.

If you study Apple and examine why the iPhone was so successful, you’ll see that its success was largely predicated on the iOS app store, which served as a marketplace that connected developers with users. The more customers (users) there were, the more incentive there was to come and sell your goods as a merchant (developers) in the marketplace (app store). Therefore the marketplace grew and grew as the two sides constantly incentivized one another to grow, which compounded the growth.

That phenomenon I just described is called two-sided network effects, and we’re beginning to see the same type of network effects take hold with our body-worn computers. That’s why a decent portion of my talk was spent on the Apple Watch. Wearables, hearables or smart hearing aids: they’re all effectively the same thing, a body-worn computer. Much of the innovation and use cases beginning to surface from the Apple Watch can be applied to our ear-worn computers too. Therefore, Apple Watch users and hearable users comprise the same user base to an extent (they’re all body computers), which means that developers creating new functionality and utility for the Apple Watch might indirectly (or directly) be developing applications for our in-the-ear devices too. The utility and value of our smart hearing aids and hearables will continue to rise long after the patient has purchased their device, making for a much stronger value proposition.

Smart Assistant Usage will be Big

One of the most exciting use cases that I think is on the cusp of breaking through in a big way in this industry is smart assistant integration into hearing aids (it’s already happening in hearables). I’ve attended multiple conferences dedicated to this technology and have posted a number of blogs on smart assistants and the voice user interface, so I don’t want to rehash every reason why I think this will be monumental for this industry’s product offering. The main takeaway is this: the group adopting this new user interface the fastest is the same cohort that makes up the largest contingent of hearing aid wearers, older adults. The reason for this fast adoption, I believe, is that there are few limitations to speaking and issuing commands to control your technology with your voice. This is why voice is so unique; it’s conducive to the full age spectrum, from kids to older adults, while something like the mobile interface isn’t particularly conducive to older adults who might have poor eyesight, dexterity or mobility.

This user interface and the smart assistants that mediate the commands are incredibly primitive today relative to what they’ll mature to become. Jeff Bezos famously quipped in 2016 in regard to this technology that, “It’s the first inning. It might even be the first guy’s up at bat.” Even in the technology’s infancy, the adoption of smart speakers among the older cohort is surprising and suggests that they’re beginning to depend on smart-assistant-mediated voice commands rather than tapping, touching and swiping on their phones. Once this becomes integrated into hearing aids, patients will be able to perform many of the same functions that you or I do with our phones, simply by asking their smart assistant. One’s hearing aid serving the role (to an extent) of a smartphone further strengthens the device’s value proposition.

Biometric Sensors

If there’s one set of use cases that I think can rival the overall utility of voice, it’s the implementation of biometric sensors into ear-worn devices. To be perfectly honest, I am startled by how quickly this is already beginning to happen, with Starkey making the first move by introducing a gyroscope and accelerometer into its Livio AI hearing aid, allowing for motion tracking. These sensors support the use cases of fall detection and fitness tracking. If “big data” was the buzz of the past decade, then “small data,” or personal data, will be the buzz of the next ten years. Life insurance companies like John Hancock are introducing policies built around fitness data, converting this feature from a “nice to have” to a “need to have” for those who need to be wearing an all-day data recorder. That’s exactly the role the hearing aid is shaping up to serve: a data collector.

The type of data that can be recorded is really only limited by the types of sensors embedded in the device, and we’ll soon see the introduction of PPG sensors, as Valencell and Sonion plan to release a commercially available sensor small enough to fit into a RIC hearing aid, available in 2019 for OEMs to implement into their offerings. These light-based sensors are currently built into the Apple Watch and provide the ability to track your heart rate. A multitude of people have credited their Apple Watch with saving their life, as they were alerted to abnormal spikes in their resting heart rates, which were discovered to be life-threatening abnormalities in their cardiac activity. So, we’re talking about hearing aids acting as data collectors and preventative health tools that might alert the wearer to a life-threatening condition.

As these types of sensors continue to shrink in size and become more capable, we’re likely to see more types of data that can be harvested, such as blood pressure and other cardiac data from the likes of an EKG sensor. We could potentially even see a sensor capable of gathering glucose levels non-invasively, which would be a game-changer for the 100 million people with diabetes or prediabetes. We’re truly at the tip of the iceberg with this aspect of the devices, and it would make the hearing healthcare professional a necessary component (fitting the “data collector”) for the cardiologist or physician who needs their patient’s health data monitored.

More to Come

This is just some of what’s happened across the past year. One year! I could write another 1,500 words on interesting developments that have occurred this year, but these are my favorites. There is seemingly so much more to come with this technology, and as these devices continue their computerized transformation into something more akin to the iPhone, there’s no telling what other use cases might emerge. As the movie Field of Dreams so famously put it, “If you build it, they will come.” Well, the user base of all our body-worn computers continues to grow, further enticing developers to come make their next big payday. I can’t wait to see what’s to come in year two, and I fully plan on ramping up my coverage of all the trends converging around the ear. So stay tuned, and thank you to everyone who has supported me and read this blog over this first year (seriously, every bit of support means a lot to me).

The Next Frontier

In my first post back in 2017, I wrote that the inspiration for creating this blog was to provide an ongoing account of what happens after we connect our ears to the internet (via our smartphones). What new applications and functionality might emerge when an audio device serves as an extension of one’s smartphone? What new hardware possibilities open up now that the audio device is “connected”? This week, Starkey moved the ball forward, changing the narrative and design around what a hearing aid can be with the debut of its new Livio AI hearing aid.

Livio AI embodies the transition to a multi-purpose device, akin to our hearables, with new hardware in the form of embedded sensors not seen in hearing aids to date, and companion apps that allow for more user control and increased functionality. Much like ReSound firing the first shot in the race to create connected hearing aids with the first “Made for iPhone” hearing aid, Starkey has fired the first shot in what I believe will be the next frontier: the race to create the most compelling multi-purpose hearing aid.

With the OTC changes fast approaching, I’m of the mind that one way hearing healthcare professionals will be able to differentiate in this new environment is by offering exceptional service and guidance around unlocking all the value possible from these multi-purpose hearing aids. This spans the whole patient experience, from the way the device is programmed and fit, to educating the patient on how to use the new features. Let’s take a look at what one of the first forays into this arena looks like by breaking down the Livio AI hearing aid.

Livio AI’s Thrive App

Thrive is a companion app that can be downloaded for use with Livio AI, and I think it’s interesting for a number of reasons. For starters, it’s Starkey’s attempt to combat the potential link between hearing loss and cognitive decline in our aging population. It does this by “gamifying” two sets of metrics that roll up into a 200-point “Thrive” score meant to be achieved regularly.

The first set of metrics is geared toward measuring your body activity, based on data collected through sensors that gauge your daily movement. By embedding a gyroscope and accelerometer into the hearing aid, Livio AI is able to track your movement and monitor some of the same types of metrics as an Apple Watch or Fitbit. Each day, your goal is to reach 100 “Body” points by moving, exercising and standing up throughout the day.

The next bucket of metrics is entirely unique to this hearing aid and is based on the way you wear your hearing aids. This “Brain” category measures the daily duration the user wears the hearing aid, the amount of time spent “engaging” other people (which is important for maintaining a healthy mind), and the various acoustic environments the user experiences each day.
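To make the two-bucket scoring concrete, here’s a minimal sketch of how such a score could be computed. The point weights and daily goals below are my own invented placeholders, since Starkey hasn’t published its exact formula:

```python
def body_score(steps, exercise_minutes, stand_hours):
    # Hypothetical 100-point "Body" bucket: movement, exercise, standing
    move = min(steps / 10_000, 1.0) * 40
    exercise = min(exercise_minutes / 30, 1.0) * 40
    stand = min(stand_hours / 12, 1.0) * 20
    return round(move + exercise + stand)

def brain_score(hours_worn, engagement_hours, acoustic_environments):
    # Hypothetical 100-point "Brain" bucket: daily wear time, time spent
    # engaging other people, and variety of acoustic environments
    wear = min(hours_worn / 12, 1.0) * 40
    engage = min(engagement_hours / 2, 1.0) * 40
    variety = min(acoustic_environments / 4, 1.0) * 20
    return round(wear + engage + variety)

def thrive_score(steps, exercise_minutes, stand_hours,
                 hours_worn, engagement_hours, acoustic_environments):
    # Body (0-100) + Brain (0-100) roll up into the 200-point score
    return (body_score(steps, exercise_minutes, stand_hours)
            + brain_score(hours_worn, engagement_hours, acoustic_environments))

# A fully active day maxes out at 200
print(thrive_score(10_000, 30, 12, 12, 2, 4))  # prints 200
```

The appeal of capping each bucket at 100 is that an active body can’t compensate for unworn hearing aids, or vice versa; both halves of the lifestyle have to be maintained to max out the score.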

So, through gamification, the hearing aid wearer is encouraged to live a healthy lifestyle and use their hearing aids throughout the day in various acoustic settings, engaging in stimulating conversation. To me, this will serve as a really good tool for the audiologist to ensure that the patient is wearing the hearing aid to its fullest. Additionally, for those caring for an elderly loved one, this can be a very effective way to track how active your loved one’s lifestyle is and whether they’re actually wearing their hearing aids. That’s the real sweet spot here: you can quickly pull up their Thrive score history to get a sense of what your aging loved one is doing.

Healthkit SDK Integration

Apple Health App without Apple Watch

Apple Health App With Apple Watch

Another very subtle thing about the Thrive app that has some serious future applications is the fact that Starkey has integrated Thrive’s data into Apple’s HealthKit SDK. This is one of the only third-party device integrations I know of in this SDK at this point. The image above is a side-by-side comparison of what Apple’s Health app looks like with and without Apple Watch integration. As you can see, the image on the right displays the biometric data that was recorded by my Watch and sent to my Health app. Livio AI’s data will be displayed in the same fashion.

So what? Well, as I wrote about previously, the underlying reason this is a big deal is that Apple has designed its Health app with future applications in mind. In essence, Apple appears to be aiming to make the data easily transferable, in an encrypted manner (HIPAA-friendly), across Apple-certified devices. So, it’s completely conceivable that you’d be able to share the biometric data being ported into your Health app (i.e. Livio AI data) with a medical professional.

For an audiologist, this would mean that you’d be able to remotely view the data, which might help to understand why a patient is having a poor experience with their hearing aids (they’re not even wearing them). Down the line, if hearing aids like Livio were to have more sophisticated sensors embedded, such as a PPG sensor to monitor blood pressure, or a sensor that can monitor your body temperature (as the tympanic membrane radiates body heat), you’d be able to transfer a whole host of biometric data to your physician to help them assess what might be wrong with you if you’re feeling ill. As a hearing healthcare professional, there’s a possibility that in the near future, you will be dispensing a device that is not only invaluable to your patient but to their physician as well.

Increased Intelligence

Beyond the fitness and brain activity tracking, there are some other cool use cases packed into this hearing aid. There’s a real-time language translation feature that supports 27 languages, handled through the Thrive app and powered through the cloud (so you’ll need internet access to use it). This seems to draw from the Starkey-Bragi partnership formed a few years ago, which was a good indication that Starkey was looking to venture down the path of making a feature-rich hearing aid with multiple uses.

Another aspect of the smartphone that Livio AI leverages is GPS. This allows users to locate their hearing aids with their smartphone if the devices go missing. Additionally, the user can set “memories” that adjust hearing aid settings based on the acoustic environment they’re in. If there’s a local coffee shop or venue the user frequents, where they’ll want their hearing aids boosted or turned down in some fashion, “memories” will automatically adjust the settings based on the pre-determined GPS location.
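Under the hood, a “memory” like this boils down to a simple geofence check: compare the phone’s current GPS fix against each saved location and switch programs when you’re inside the radius. Here’s a rough sketch; the coordinates, radii and program names are made up, and the real logic lives in the companion app:

```python
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two GPS fixes, in meters
    R = 6_371_000  # mean Earth radius
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Hypothetical saved "memories": (lat, lon, radius in meters, program)
memories = [
    (35.2271, -80.8431, 75, "coffee shop boost"),
    (35.2031, -80.8395, 150, "noisy venue"),
]

def active_memory(lat, lon, default="everyday"):
    # Switch hearing aid programs when the phone enters a saved geofence
    for mlat, mlon, radius, program in memories:
        if distance_m(lat, lon, mlat, mlon) <= radius:
            return program
    return default

print(active_memory(35.2271, -80.8431))  # prints "coffee shop boost"
```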

If you “pop the hood” of the device and take a look inside, you’ll see that the components comprising the hearing aid have been significantly upgraded too. Livio AI boasts triple the computing power and double the local memory capacity of the previous line of Starkey hearing aids. This should come as no surprise, as the most impressive innovation happening with ear-worn devices is inside the devices, due to the economies of scale and massive proliferation of smartphones. This increase in computing power and memory capacity is yet another example of the “peace dividends of the smartphone war.” This kind of computing power allows for a level of machine learning (similar to Widex’s Evoke) to adjust to different sound environments based on all the acoustic data that Starkey’s cloud is processing.
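To give a flavor of what “adjusting to sound environments” means computationally, here’s a toy nearest-centroid classifier over two simple acoustic features. This is purely illustrative, with made-up feature values; the actual machine learning in products like Livio AI or Evoke is trained on far richer acoustic data:

```python
def classify_environment(level_db, modulation):
    # Toy acoustic scene classifier: pick the nearest centroid in a
    # two-feature space of overall level (dB SPL) and amplitude
    # modulation depth (0-1, speech is strongly modulated).
    centroids = {
        "quiet": (35, 0.2),
        "speech": (65, 0.7),
        "speech-in-noise": (75, 0.4),
        "music": (80, 0.8),
    }
    def dist(name):
        lv, mod = centroids[name]
        # normalize each feature so neither dominates the distance
        return ((level_db - lv) / 10) ** 2 + ((modulation - mod) / 0.2) ** 2
    return min(centroids, key=dist)

print(classify_environment(66, 0.68))  # prints "speech"
```

A classifier like this is what lets the hearing aid pick a processing program automatically instead of making the wearer toggle settings by hand.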

The Race is On

As I mentioned at the beginning of this post, Starkey has initiated a new phase of hearing aid technology, and my hope is that it spurs the other manufacturers to follow suit, in the same way that everyone followed ReSound’s lead in bringing “connected” hearing aids to market. Starkey CTO Achin Bhowmik believes that sensors and AI will do to the hearing aid what Apple did to the phone, and I don’t disagree.

As I pointed out in a previous post, the last ten years of computing were centered around porting the web to the apps on our smartphones. The next wave of computing appears to be a process of offloading and unbundling the “jobs” our smartphone apps represent to a combination of wearables and voice computing. I believe the ear will play a central role in this next wave, largely because it’s a perfect position for a sensor-equipped, ear-worn computer that doubles as a home to the smart assistant(s) that mediate our voice commands. This is the dawn of a brand new day, and I can’t help but feel very optimistic about the future of this industry and the hearing healthcare professionals who embrace these new offerings. In the end, however, it’s the patient who will benefit the most, and that’s a good thing when so many people could and should be treating their hearing loss.

Valencell + Sonion

News broke last week that Sonion, a leading components manufacturer for ear-worn devices, had led a $10.5 million Series E round of investment into Valencell, a pioneer in the biometric sensor manufacturing industry. In exchange for its investment into Valencell, Sonion now has exclusivity on Valencell’s bio-sensor technology for the ear-level space. Sonion plans to integrate these biometric sensors into the component packages that they’re developing for hearing aid and hearable manufacturers. This new strategic partnership will help Valencell grow its footprint by leveraging Sonion’s distribution network of ear-worn devices, and ultimately expose more end-users to Valencell’s biometric sensor technology.

The March toward the Ear

The type of sensor that Valencell develops is an optical PPG (photoplethysmography) sensor. It records measurements such as your heart rate by illuminating the skin with a light and measuring changes in light absorption. From the light absorption, it detects the volume of blood and the pressure of the pulse, allowing for an accurate heart rate measurement. If you’ve opened the Heart Rate app on an Apple Watch, you’ll notice that a green light on the underside of the Watch lights up. That’s a PPG sensor.
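The signal processing behind that green light can be sketched in a few lines: sample the reflected light, find the pulse peaks, and convert the peak count into beats per minute. This naive version ignores the motion-artifact and ambient-light filtering that real sensors like Valencell’s perform:

```python
from math import pi, sin

def heart_rate_bpm(ppg, fs):
    # Count pulse peaks: a sample counts as a peak if it exceeds its
    # neighbors and a threshold halfway between the waveform's min and max
    threshold = (min(ppg) + max(ppg)) / 2
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > threshold and ppg[i - 1] < ppg[i] >= ppg[i + 1]]
    duration_s = len(ppg) / fs
    return 60 * len(peaks) / duration_s

# Synthetic 1 Hz pulse (i.e. 60 bpm) sampled at 25 Hz for 10 seconds
fs = 25
ppg = [sin(2 * pi * t / fs) for t in range(fs * 10)]
print(round(heart_rate_bpm(ppg, fs)))  # prints 60
```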

There are a number of reasons that companies like Valencell are so keen on embedding these types of sensors in our ears. Valencell president Dr. Steven LeBoeuf made the case for why the ear is the most practical spot on the body to record biometric data:

Due to its unique physiology, the ear is one of the most accurate spots on the body to measure physiological information,

One can measure more biometrics at the ear than any other single location on the body,

Environmental sensors at the ear (exposed to the environment at all times) can assess what airborne vapors and particles one is breathing and expiring, and

So in essence, the ear is the most precise, most robust, and most exposed area on the body for recording this information, all while serving as a location where we have already become accustomed to wearing technology. So, this is a no-brainer, right? Why aren’t our ear-worn devices already using these sensors?

The Challenges of Really Small Devices

As I wrote in my last post about the innovation happening in our hearables and hearing aids, it can be rather daunting to cram all this technology into really small devices that fit in our ears. Battery life is always a challenge because, with small devices, there’s only so much power to go around. Valencell has lowered the power consumption of its sensors by a factor of 25 over the past five years, but will still need to drop that even further for these sensors not to be viewed as major battery drains. Price is another obstacle, as these sensors currently add an incremental manufacturing cost that isn’t feasible for the lower-cost end of the market.

That’s why this partnership is so exciting to me. What Sonion really brings to the table is expertise in reduction: reduction in size, price and power consumption, three of the biggest obstacles to feasibly embedding these sensors in ear-worn devices.

The Benefits of Putting Biometric Sensors in our Ears

There are two sets of use cases that are currently clear to me around biometric data. The first would be fitness applications. Just think of your hearing aid or earbuds capturing the same fitness data that an Apple Watch or Fitbit records. I think this set of applications gets really interesting when you layer in smart assistants, which can be used to guide or coach the user, but that’s another post for another day. For now, let me just point out that whatever you can do with your wrist-worn wearable today from a data collection standpoint, would seemingly be feasible with our ear-worn wearables that are around the corner.

The next, and much more exciting, use case is preventative health. If you just search “Apple Watch saves life,” you’d be amazed at all the people who were alerted by their Apple Watch that something funky was going on with the data being logged. Here are a few examples:

An 18-year-old girl was sitting in church when her Apple Watch told her to seek medical attention because her resting heart rate had spiked to 120-130 beats per minute. The doctors found she was experiencing kidney failure.

A 32-year-old man began bleeding out of the blue and was prompted by his Apple Watch to seek immediate medical attention. He called 911, and by the time the ambulance arrived he had lost 80% of his blood. An ulcer had unknowingly burst in his body, and doctors were cited as saying that the Watch notification gave him just enough time to call for help.

“After an electrocardiograph machine indicated something was wrong, doctors conducted tests and discovered that two out of his three main coronary arteries were completely blocked, with the third 90 percent blocked.”


This is a big part of why I am so bullish on the future of ear-worn devices. I imagine we’ll see tons of stories like these emerge when the same types of sensors currently in the Apple Watch start making their way into our ear-worn devices. We know that the ear is a perfect place to record this type of data, and there’s no new adoption curve for these devices: we’re already wearing tons of things in our ears!

Hearing aids in particular, with form factors conducive to all-day usage, strike me as the perfect preventative health device. The largest demographic of hearing aid wearers (75+ years old) is probably also the group most in need of a health-monitoring tool like this. As these sensors mature and become capable of detecting a wider variety of risks, the value proposition of these devices will grow with them.

I don’t think it’s too far-fetched to think that in the not-too-distant future, one’s physician might actually “prescribe” a preventative health device to monitor a pre-existing condition or some type of medical risk. I can picture them showing a list of certified, body-worn “preventative health” devices containing a range of options, from the Apple Watch, to sensor-equipped hearing aids, to cutting-edge hearables. Look no further than the software development kits Apple has rolled out over the past few years, and you’ll see that biometric data logging and sharing is very much on the horizon. Exciting times indeed!

The Peace Dividends of the Smartphone War

One of the biggest byproducts of the mass proliferation of smartphones around the planet is that the components inside the devices are becoming increasingly powerful and sophisticated, while simultaneously becoming smaller and less expensive. Chris Anderson, the CEO of 3D Robotics, refers to this as the “peace dividends of the smartphone wars,” where he says:

The peace dividend of the smartphone wars, which is to say that the components in a smartphone — the sensors, the GPS, the camera, the ARM core processors, the wireless, the memory, the battery — all that stuff, which is being driven by the incredible economies of scale and innovation machines at Apple, Google, and others, is now available for a few dollars.

Since this blog is focused on innovation occurring around ear-worn technology, let’s examine some of the different peace dividends being reaped by hearing aid and hearables manufacturers and how those look from a consumer’s standpoint.

Solving the Connectivity Dilemma

Ever since the debut of the first “Made for iPhone” hearing aid in 2013 (the ReSound LiNX), each of the major hearing aid manufacturers has followed suit in the pursuit of seamless connectivity to the user’s smartphone. This type of connectivity was limited to iOS until September 2016, when Sonova released its Audeo B hearing aid, which used a different Bluetooth protocol that allowed for universal connectivity to all smartphones. To keep the momentum going, Google just announced that its Pixel and Pixel 2 smartphones will allow pairing of any type of Bluetooth hearing aid. The hearing aids and the phones are both becoming more compatible with each other. Every year, we move closer and closer to universal connectivity between our smartphones and Bluetooth hearing aids.

While connectivity is great and opens up a ton of new opportunities, it also drains the devices’ batteries. This poses a challenge to the manufacturers of these super small devices because, while the majority of components packed inside them have been shrinking in size, the one key component that doesn’t really shrink is the battery.

There are a few things manufacturers are doing to circumvent this roadblock, based on recent developments largely owed to the smartphone supply chain. The first is rechargeability on the go. In the hearables space, you’ll see that pretty much every device has a companion charging case, from AirPods to IQbuds to the Bragi Dash Pro. Hearing aids, which have long been powered by disposable zinc-air batteries (lasting about 4-7 days depending on usage), are now quickly going the rechargeable route as well. Many can be charged in companion cases akin to those used with hearables.

Rechargeability is a good step forward, but it doesn’t really solve the issue of batteries draining quickly. If we can’t fit a bigger battery into such a small space and battery innovation is currently stagnant, engineers are forced to look at how the power is actually used. Enter the computer chips.

Chip’in In

I’ve written about this before, but the W1 chip that Apple debuted in 2016 was probably one of the biggest moments for the whole hearables industry. Not only did it solve the reliable pairing issue (this chip is responsible for the fast pairing of AirPods), but it also uses low-power Bluetooth, ultimately providing five hours of listening time before you need to pop the earbuds back into their charging case (15 minutes of charge yields another three hours). With this one chip, Apple effectively removed the two largest deterrents to hearables adoption: poor battery life and unreliable pairing.

Apple has since debuted an updated, improved W2 chip, used in the Apple Watch, that will likely make its way to the second version of AirPods. Each iteration will likely continue to extend battery life.

Not to be outdone, Qualcomm introduced its new QCC5100 chipset at CES this January. Qualcomm’s SVP of Voice & Music, Anthony Murray, stated:

“This breakthrough single-chip solution is designed to dramatically reduce power consumption and offers enhanced processing capabilities to help our customers build new life-enhancing, feature-rich devices. This will open new possibilities for extended-use hearable applications including virtual assistants, augmented hearing and enhanced listening,”

This is important because Apple tends not to license out its chips, so for third party hearable and hearing aid manufacturers, they’ll need to reap this type of innovation from a company like Qualcomm to compete with the capabilities that Apple brings to market.

The next one is actually a dividend of a dividend. Smart speakers, like Amazon’s Echo, are cheap to manufacture due to the smartphone supply chain, and as a result they have driven down the price of digital signal processing (DSP) chips to a fraction of what it was. These specialized chips are used to process audio (all those Alexa commands) and have long been used by hearing aid manufacturers. Similar to the W1 chip, these chips provide a low-power option that can now be utilized by hearable manufacturers. More options for third-party manufacturers.

So, with major tech powerhouses sparring against each other in the innovation ring, hearing aid and hearable manufacturers are able to reap that innovation at a cheap price, ultimately resulting in better devices for consumers at ever-lower costs.

Sensory Overload

What’s on the horizon with the innovation happening within our ear-computers is where things really start to get exciting. The most obvious example of where things are headed is the sensors being fitted into these devices. At its summit this year, Starkey announced an upcoming hearing aid that will contain an inertial sensor to detect falls. How can it detect people falling down? Another dividend: the same types of gyroscopes and accelerometers that work in tandem in our phones to detect orientation. This sensor combo can also track overall motion, so not only can it detect a person falling down, it can also serve as an overall fitness monitor. These sensors are now small enough and cheap enough that virtually any ear-worn device manufacturer can embed them into its devices.
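A hedged sketch of how an inertial sensor could flag a fall: look for the signature of free fall (acceleration magnitude well below 1 g) followed closely by an impact spike (well above 1 g). The thresholds and window below are illustrative guesses, not Starkey’s actual algorithm:

```python
def detect_fall(samples, g=9.81):
    # samples: (x, y, z) accelerometer readings in m/s^2 at ~50 Hz.
    # A fall shows up in the acceleration magnitude as a free-fall dip
    # followed shortly by an impact spike.
    FREE_FALL, IMPACT, WINDOW = 0.5 * g, 2.5 * g, 25  # ~0.5 s window
    magnitudes = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    for i, m in enumerate(magnitudes):
        if m < FREE_FALL:
            # look for an impact within the window after the dip
            if any(m2 > IMPACT for m2 in magnitudes[i:i + WINDOW]):
                return True
    return False

still = [(0.0, 0.0, 9.81)] * 100  # wearer at rest: magnitude stays at 1 g
fall = still[:40] + [(0.0, 0.0, 1.0)] * 10 + [(0.0, 0.0, 30.0)] * 3 + still[:40]
print(detect_fall(still), detect_fall(fall))  # prints False True
```

In a real product this runs continuously on-device, and a detected fall triggers an alert to a designated contact rather than just returning a boolean.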

Valencell, a biometric sensor manufacturer, has been paving the way in what you can do when you implement heart rate sensors in ear-worn devices. By combining the metrics these sensors record, you can measure things such as core temperature, which would be great for monitoring and alerting the user to the risk of heat exhaustion. You can also gather much more precise fitness metrics, such as the intensity level of one’s workout.

And then there are the efforts around one day being able to non-invasively monitor glucose levels through a hearing aid or hearable. This would most likely be done via some type of biometric sensor, or combination of components, derived from our smartphones as well. For the 29 million Americans living with diabetes, many of whom also have hearing loss, a gadget that provides both amplification and glucose monitoring would be much appreciated and compelling.

These types of sensors serve as tools to create new use cases around both preventative health applications, as well as use cases designed for fitness enthusiasts that go beyond what exists today.

The Multi-Function Transformation

One of the reasons that I started this blog was to try and raise awareness around the fact that the gadgets we’re wearing in our ears are on the cusp of transforming from single-function devices, whether that be for audio consumption or amplification, into multi-function devices. All of these disparate innovations make it possible for such a device to emerge without limiting factors such as terrible battery life.

This type of transformation does a number of things. First of all, I believe that it will ultimately kill the negative stigma associated with hearing aids. If we’re all wearing devices in our ears for a multitude of reasons, for increasingly longer periods of time, then who’s to know why you’re even wearing something in your ear, let alone bat an eye at you?

The other major thing I foresee this doing is continuing to compound the network effects of these devices. Much like with our smartphones, once there is a critical mass of users, there tends to be a virtuous cycle of value creation spearheaded by developers, meaning there’s more and more you can do with these devices. Back in 2008, no one could have predicted what the smartphone app economy would look like here in 2018. We’re currently in that same type of starting period with our ear-computers, where the doors are opening for developers to create all-new functionality. Smart assistants alone represent a massive wave of potential new functionality that I’ve written about extensively, and as of January 2018, hearable and hearing aid manufacturers can easily integrate Alexa into their devices, thanks to Amazon’s Mobile Accessory Kit.

It’s hard to foresee everything we’ll use these devices for, but the ability for something akin to the app economy to form and flourish is now enabled by so many of these recent developments birthed by the smartphone supply chain. Challenges remain for those producing our little ear-computers, but the fact of the matter is that the components housed inside these small gadgets are simultaneously getting cheaper, smaller, more durable and more sophisticated. There will be winners and losers as this evolves, but one obvious winner is the consumer.

Outside Disruption

There have been a number of recent developments that involve impending moves from non-healthcare companies intending to venture into the healthcare space in some capacity. First, there was the joint announcement from Berkshire Hathaway, JP Morgan and Amazon that they intend to team up to “disrupt healthcare” by creating an independent healthcare company specifically for their collective employees. You have to take notice anytime you have three companies of that magnitude, led by Buffett, Bezos and Dimon, announcing an upcoming joint venture.

Not to be outdone, Apple released a very similar announcement last week, stating that, “Apple is launching medical clinics to deliver the world’s best health care experience to its employees.” The new venture, AC Wellness, will start as two clinics near the new “spaceship” corporate office (the one where Apple employees keep walking into the glass walls). Here’s an example of what one of the AC Wellness job postings looks like:

So in a matter of weeks, we have Amazon, Berkshire Hathaway, JP Morgan and now Apple publicly announcing that they plan to create distinct healthcare offerings for their employees. I don’t know what the three-headed joint venture will ultimately look like, or whether either of these ventures will extend beyond their employees, but I think there is a trail of crumbs to follow in trying to discern what Apple might ultimately be aspiring to.

Using the Past to Predict the Future

If you go back and look at the timeline of some of Apple’s moves over the past four years, this potential move into healthcare seems less and less surprising. Let’s take a look at some of the software and hardware developments over the past few years, and how they might factor into Apple’s healthcare play:

The Software Developer Kits – The Roads and Repositories

The first major revelation that Apple might be planning something around healthcare was the introduction of the HealthKit software development kit (SDK) back in 2014. HealthKit allows third-party developers to gather data from various apps on users’ iPhones and then feed that health-based data into Apple’s Health app (a pre-loaded app that comes standard on all iPhones running iOS 8 and above). For example, a developer could feed data from a third-party fitness app (e.g. Nike+ Run) into Apple’s Health app, so that the user can see that app’s data alongside any other health-related data that has been gathered. In other words, Apple leveraged third-party developers to make its Health app more robust.

When HealthKit debuted in 2014, it was a bit of a head-scratcher, because the type of biometric data you can gather from a phone alone is very limited and inaccurate. Then Apple introduced its first wearable, the Apple Watch, in 2015, and suddenly HealthKit made a lot more sense, as the Apple Watch represented a much more accurate data collector. If your phone is in your pocket all day, you might get a decent pedometer reading of how many steps you’ve taken, but if you’re wearing an Apple Watch, you’ll record much more precise and actionable data, such as your heart rate.

Apple followed up on this a year later with the introduction of a second SDK, ResearchKit. ResearchKit allows Apple users to opt into sharing their data with researchers conducting studies, providing a massive influx of new participants and data, which in turn can yield more comprehensive research. For example, researchers studying asthma developed an app to help track Apple users suffering from asthma; 7,600 people enrolled through the app in a six-month program consisting of surveys about how they treated their asthma. Where things got really interesting was when researchers started looking at ancillary data from the devices, such as each user’s geolocation, pairing it with environmental data such as pollen counts and the heat index to identify any correlations.

Then in 2016, Apple introduced a third SDK, CareKit. This new kit serves as an extension to HealthKit that allows developers to build medically focused apps to track and manage medical care. The framework provides distinct modules for developers to build on, covering common features a patient would use to “care” for their health, such as reminders around medication cadences or objective measurements taken from the device, like blood pressure readouts. Most importantly, CareKit provides easy templates for sharing data (e.g. with a primary care physician).
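The “medication cadence” idea is simple at its core. The actual CareKit framework is a Swift library with its own types, so the following is just a hypothetical back-of-the-napkin illustration of the adherence logic such a module encapsulates: compare the doses a regimen prescribes against the doses the patient has logged.

```python
from datetime import date

def doses_missed(schedule_start, doses_per_day, taken_log, today):
    """Compare doses expected since the regimen started with doses logged.

    schedule_start: first day of the regimen
    doses_per_day: prescribed cadence (e.g. 2 for twice daily)
    taken_log: list of dates on which a dose was logged
    today: the date to evaluate adherence through
    """
    days_elapsed = (today - schedule_start).days + 1
    expected = days_elapsed * doses_per_day
    return max(0, expected - len(taken_log))

# Twice-daily regimen started March 1; three doses logged by March 2,
# so the patient is one dose behind and the app could fire a reminder.
start = date(2018, 3, 1)
log = [date(2018, 3, 1), date(2018, 3, 1), date(2018, 3, 2)]
print(doses_missed(start, 2, log, date(2018, 3, 2)))  # 1
```

A reminder module like this becomes far more valuable once its output can be shared with a physician, which is exactly the piece CareKit’s sharing templates address.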

These SDKs served as tools to create the roads and repositories needed to transfer and store data. In the span of a few years, Apple has turned its Health app into a very robust data repository, while incrementally making it easier to deposit, consolidate, access, build upon, and share health-specific data.

Apple’s Wearable Business – The Data Collectors

Along with the Apple Watch in 2015 and AirPods in 2016, Apple introduced a brand new, proprietary, wearable-specific computer chip to power these devices: the W1 chip. For anyone who has used AirPods, the W1 chip is responsible for the automatic, super-fast pairing to your phone. The first two series of the Apple Watch and the current, first-generation AirPods use the W1 chip, while the Apple Watch Series 3 uses an upgraded W2 chip, which Apple claims is 50% more power efficient with data speeds up to 85% faster.

W1 Chip via The Verge

Due to the size constraints of something as small as AirPods, chip improvements are crucial to making the devices more capable, as they allow engineers to allocate more space and power to other things, such as biometric sensors. In an article by Steve Taranovich of Planet Analog, Dr. Steven LeBoeuf, president of biometric sensor manufacturer Valencell, said, “the ear is the best place on the human body to measure all that is important because of its unique vascular structure to detect heart rate (HR) and respiration rate. Also, the tympanic membrane radiates body heat so that we are able to get accurate body temperature here.”

Renderings of AirPods with biometric sensors included

Apple seems to know this too, as it filed three patents (1, 2 and 3) in 2015 around adding biometric sensors to AirPods. If Apple can fit biometric sensors into AirPods, then it’s feasible to think hearing aids can support biometric sensors as well. There are indicators that this is already becoming a reality, as Starkey announced an inertial sensor that will be embedded in its next line of hearing aids to detect falls. While the main method of logging biometric data currently resides with wrist-worn wearables, it’s very possible that our hearables will soon serve that role, as they sit in the optimal spot on the body to do so. A brand new use case for our ever-maturing ear-computers.

AC Wellness & Nurse Siri

The timing of these AC Wellness clinics makes sense. Apple has had four years to build out the data layer of its offering via the SDKs. It has made it easy to access and share data between apps, while simultaneously making its own Health app more robust. At the same time, it now sells the most popular wearable and hearable, effectively owning the biometric data collection market. The Apple Watch is already beginning to yield the types of results we can expect when this all gets combined:

To add more fuel to the fire, here’s how the AC Wellness about page reads:

“Enabled by technology” sure seems to indicate that these clinics will draw heavily from all the groundwork that’s been laid. It’s possible that patients would log their data via the Apple Watch (and down the line maybe AirPods/MFi hearing aids) and then transfer said data to their doctor. The preventative health opportunities around this type of combination are staggering. Monitoring glucose levels for diabetes. EKG monitoring. Medication management for patients with depression. These are just scratching the surface of how these tools can be leveraged in conjunction. When you start looking at Apple’s wearable devices as biometric data recorders and you consider the software kits that Apple is enabling developers with, Apple’s potential venture into healthcare begins making sense.

The last piece of the puzzle, to me, is Siri. With all of these other pieces in place, what patients really need is for someone (or something) to understand the data they’re looking at. The pulmonary embolism example above assumes that every user would be able to catch that irregularity. The more effective way would be to enlist an AI (Siri) to parse through your data, alert you to what needs your attention, and coordinate with the appropriate doctor’s office to schedule time with a doctor. You’d then show up to the doctor, who could review the biometric data Siri sent over. If Apple were to give Siri her due and dedicate significant resources, she could be the catalyst that makes this all work. That, to me, would be truly disruptive.
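As a toy illustration of the kind of triage an assistant could perform, here’s a sketch that flags heart rate readings outside a resting range. The thresholds and the data format are my own assumptions for the example; a real assistant would personalize the range and account for context like exercise and sleep.

```python
def flag_anomalies(readings, resting_low=50, resting_high=100):
    """Flag (timestamp, bpm) readings outside an expected resting range.

    The 50-100 bpm bounds are illustrative defaults, not medical advice;
    a real system would learn each user's baseline and activity context.
    """
    alerts = []
    for timestamp, bpm in readings:
        if bpm < resting_low:
            alerts.append((timestamp, bpm, "unusually low heart rate"))
        elif bpm > resting_high:
            alerts.append((timestamp, bpm, "unusually high heart rate"))
    return alerts

# A day of readings: one overnight spike stands out.
day = [("08:00", 62), ("12:30", 71), ("03:15", 134), ("22:00", 58)]
for when, bpm, why in flag_anomalies(day):
    print(f"{when}: {bpm} bpm, {why}; consider sharing with your doctor")
```

The value of an assistant is precisely in this filtering step: surfacing the one reading worth acting on, then handling the scheduling and data hand-off to the doctor.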

The annual Consumer Electronics Show (CES) took place this past week in Las Vegas, bringing together 184,000 attendees and a whole host of vendors in the consumer electronics space to showcase all of the new, innovative things each is working on. Once again, smart assistants stole the show, making this the third year in a row in which smart assistants have dominated the overall theme of the show. Along with the Alexa-fication of everything, there were a number of significant hearable announcements, each in some way or another incrementally improving and expanding on our mini ear-computers. Although I was not in attendance, these are my five takeaways from CES 2018:

1. The Alexa-fication of Everything

It seemed like just about every story coming out of this year’s show was in some way tied to an Alexa (or Google…but mainly Alexa) integration. We saw Kohler introduce the “connected bathroom” complete with a line of smart, Alexa-enabled mirrors, showers, toilets (yes, toilets), bathtubs and faucets. First Alert debuted its new Onelink Safe & Sound carbon monoxide and smoke detector with Alexa built-in. Harman revealed an Echo Show competitor, the JBL LINK View, powered by Google’s assistant.

My personal favorite of the smart-assistant integrations around the home was the inconspicuous smart light switch, the Instinct, by iDevices. By converting the standard light switches around your home to the Instinct, you boost the utility of each switch by an order of magnitude, as it allows for motion-detection lighting, energy savings, and all the benefits of Alexa built right into your walls.

And those are just the integrations that emerged for the home; the car was another focus of smart assistant integration at this year’s show. Toyota announced that it would be adding Alexa to a number of its Toyota and Lexus cars starting this year. Kia partnered with Google Assistant to begin rolling that feature out this year too. Add these integrations to a list that already includes Ford, BMW and Nissan from previous announcements. Mercedes decided it doesn’t need Google or Amazon and unveiled its own assistant. And finally, Anker debuted a Bluetooth smart charger, the Roav Viva, that can bring Alexa into whatever car you’re in for only $50.

Alexa, Google and the other smart assistants are showing no sign of slowing down in their quest to enter every area in which we exist.

2. Bragi Announces “Project Ears”

Bragi’s “Project Ears” is a combination of tinnitus relief and personalized hearing enhancement. This announcement was exciting for two reasons.

What’s particularly interesting about Bragi is its partnership with “Big 6” hearing aid manufacturer Starkey, and the byproducts of that partnership that we’re beginning to see. Last week, I wrote about Starkey’s announcement of the “world’s first hearing aid with inertial sensors” and how that was likely a byproduct of the Bragi partnership, as Bragi has been at the forefront of embedding sensors into small, ear-worn devices. Fast-forward one week to CES, and we see Bragi’s Project Ears initiative, which includes “tinnitus relief” in the form of tinnitus masking built into the device to help relieve the ringing in one’s ears. So, just as we saw Starkey incorporating elements of hearable technology into its hearing aids, we now see Bragi incorporating elements of hearing aids into its devices. The two seem to be leveraging each other’s expertise to further differentiate in their respective markets.

The second aspect of this announcement stems from Bragi’s newly announced partnership with Mimi Hearing Technologies. Mimi specializes in “personalized hearing and sound personalization,” and as a result, Bragi’s app will include a “scientific hearing test to measure your unique Earprint™.” In other words, the hearing test issued by Bragi’s app will be iterated on and improved via this partnership with Mimi. Bragi wants to match you as accurately as possible to your own hearing profile, and this announcement shows that it’s continuing to make progress in doing so.

3. Nuheara Unveils New Products & Utilization of NAL-NL2

Nuheara, the hearable startup from Down Under, introduced two new products at this year’s show. The first was the LiveIQ, a pair of wireless earbuds priced under $200. These earbuds will use some of the same technology as Nuheara’s flagship hearable, the IQBuds, and will also provide active noise cancellation.

The second device introduced was the IQBuds Boost, which will essentially serve as an upgrade to the current IQBuds. The IQBuds Boost will use what Nuheara has dubbed “EarID™,” which will provide a more “personalized experience unique to the user’s sound profile.” Sounds familiar, right? Bragi’s “Earprint™” and Nuheara’s “EarID™” both aim to let the user further personalize their experience via each company’s companion app.

In addition to the new product announcements, Nuheara also announced a partnership with the National Acoustic Lab (NAL), “to license its international, industry-recognized NAL-NL2 prescription procedure, becoming the only hearable company globally to do this.”

Here’s what Oaktree Products’ in-house PhD audiologist, AU Bankaitis, had to say about the significance of this announcement:

“Kudos to NuHeara for partnering with the National Acoustic Lab (NAL), the research arm of a leading rehabilitation research facility that developed the NAL-NL2 prescriptive formula commonly applied to hearing instruments. It will be interesting to see how this partnership will influence future IQBud upgrades. Whether or not this approach will result in a competitive advantage to other hearables remains to be seen. Research has clearly shown that relying on a fitting algorithm without applying objective verification with probe-mic measurements often times results in missing desired targets for inputs and frequencies most critical for speech.”
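For readers unfamiliar with prescriptive formulas: they map a user’s audiogram (hearing loss in dB HL at each test frequency) to amplification targets per frequency band. NAL-NL2 itself is a complex, licensed procedure, so the sketch below uses the classic “half-gain rule,” a deliberately simplified textbook stand-in, just to show the shape of the idea; the audiogram values are hypothetical.

```python
def half_gain_targets(audiogram):
    """Prescribe insertion gain of roughly half the hearing loss per band.

    This is the classic half-gain rule, NOT NAL-NL2, which additionally
    optimizes for speech intelligibility and loudness across levels.
    """
    return {freq_hz: round(loss_db * 0.5, 1)
            for freq_hz, loss_db in audiogram.items()}

# Hypothetical mild-to-moderate sloping loss (dB HL by frequency in Hz).
audiogram = {500: 20, 1000: 30, 2000: 45, 4000: 60}
print(half_gain_targets(audiogram))
# {500: 10.0, 1000: 15.0, 2000: 22.5, 4000: 30.0}
```

Bankaitis’s caveat above is about exactly this gap: prescribing targets from a formula is the easy half, and verifying with probe-mic measurements that the device actually hits those targets in a real ear is where fittings succeed or fail.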

4. Qualcomm Introduces New Chipset for Hearables

Some of the most exciting innovation happening in the wearable market as a whole, and the hearable sub-market in particular, is taking place under the hood of the devices. Qualcomm’s new chipset, the QCC5100, is a good representation of the innovation occurring inside the devices, as these chips will reduce power consumption by 65%, allowing for increased battery life. Per Qualcomm’s SVP of Voice & Music, Anthony Murray:

“This breakthrough single-chip solution is designed to dramatically reduce power consumption and offers enhanced processing capabilities to help our customers build new life-enhancing, feature-rich devices. This will open new possibilities for extended-use hearable applications including virtual assistants, augmented hearing and enhanced listening.”

It’s wild to think that it was only back in 2016 (pre-AirPods) that battery life and connectivity stood as major barriers to entry for hearable technology. The AirPods’ W1 chip dramatically improved both, and now we see other chipmakers rolling out incremental improvements, further reducing those initial roadblocks.

5. Oticon wins Innovation Award for its Hearing Fitness App

Oticon’s upcoming “hearing fitness app,” to be used in conjunction with Oticon’s Opn hearing aids, illustrates the potential for this new generation of hearing aids to harness the power of user data. The app gathers data from the user’s hearing aid usage and presents it in readouts somewhat similar to Fitbit’s slick data displays. That usage data can then be used to further enhance the user’s experience based on the listening environments the user encounters. So, not only will this empower users, but it will also serve as a great tool for audiologists to further customize the device for their patients using real data.

Oticon’s Hearing Fitness App wins CES 2018 Innovation Award

Furthermore, this app can integrate data from other wearable devices, so that all of the data is housed together in one app. It’s important to look at this as another step toward bringing to fruition the idea that hearing aids are undergoing a makeover into multi-function devices, including “biometric data harvesting” that provides actionable insight into one’s data. For example, if my hearing aids are recording my biometric data and my app notifies me that my heart rate is acting funky or my vitals are going sideways, I can send that data to my doctor and see what she recommends. That’s what this type of app could ultimately be, beyond measuring one’s “hearing fitness.”

What were your favorite takeaways from this year’s show? Feel free to comment or share on Twitter!

I will be traveling to the Alexa Conference this week in Chattanooga, Tennessee and will surely walk away from there with a number of exciting takeaways from #VoiceFirst land, so be sure to check back in for another rundown next week.

Editor’s Note: In my initial post, I mentioned that along with the long-form assessments I’ve been publishing, I’d also be doing short, topical updates. This is the first of those updates.

In the first week of 2018, we saw a handful of significant updates that pertain to various trends converging around ears. Here’s a rundown of what you need to know:

Amazon introduces the Amazon Mobile Accessory Kit (AMAK)

As Voicebot.ai reported from an Amazon blog post, Amazon’s new Mobile Accessory Kit will allow for much easier (and cheaper) Alexa integration into OEM devices, such as hearables. It has been possible to integrate Alexa into third-party devices before, but this kit offers a much more streamlined process for turning any type of hardware into Alexa-integrated hardware. This is great news for this new use case, as it will surely put Alexa in more and more of our ear-worn devices.

Per Amazon’s senior product manager, Gagan Luthara:

“With the Alexa Mobile Accessory Kit, OEM development teams no longer need to perform the bulk of the coding for their Alexa integration. Bluetooth audio-capable devices built with this new kit can connect directly to the Alexa Voice Service (AVS) via the Amazon Alexa App (for Android and iOS) on the customer’s mobile device.”

Starkey Announces Exciting Additions to Next Generation Hearing Aids

There were a number of exciting revelations at Starkey’s Biennial Expo, but among all the announcements, there were two that really intrigued me. The first was the inclusion of “fall detection” sensors in Starkey’s next generation of hearing aids. This will be the first hearing aid with inertial sensors:

On the surface, this is really great, as every 11 seconds an older adult is treated in the emergency room for a serious fall. The purpose of these sensors is to detect those types of falls so that the user can get immediate help. What’s even more intriguing is the fact that we’re now beginning to see advanced sensors being built into this new wave of hearing aids. As I will write about soon, the preventative health benefits, combined with smart assistants, offer some very exciting possibilities and another promising use case for our ear-worn devices.

The second announcement was the upcoming live-language translation feature to be added to this same next generation of Starkey hearing aids. This stems from Starkey’s partnership with hearable manufacturer Bragi, which offers this feature on its Bragi Dash Pro. The live-language translation is not Bragi’s proprietary software; Bragi currently uses the third-party application iTranslate to power this feature on its device. Although it has not been announced formally, I expect that Starkey’s live-language translation feature will also be powered by iTranslate. Expect features like this to become more widespread across our connected devices over time as more manufacturers support this type of integration.

As we move into week two of 2018, expect another wave of exciting announcements coming out of CES. Check back here next week as I will be doing a rundown of the most important takeaways coming out of Vegas this week.