
Abstract:

Techniques are disclosed for classifying a sound environment for hearing
assistance devices using redundant estimates of an acoustical environment
from two hearing assistance devices and accessory devices. In one
example, a method for operating a hearing assistance device includes
sensing an environmental sound, determining a first classification of the
environmental sound, receiving at least one second classification of the
environmental sound, comparing the determined first classification and
the at least one received second classification, and selecting an
operational classification for the hearing assistance device based upon
the comparison.

Claims:

1. A method for operating a hearing assistance device, the method
comprising: sensing an environmental sound; determining a first
classification of the environmental sound; receiving at least one second
classification of the environmental sound; comparing the determined first
classification and the at least one received second classification; and
selecting an operational classification for the hearing assistance device
based upon the comparison.

2. The method of claim 1, comprising: when the determined first
classification is the same as the at least one received second
classification, selecting an operational classification to be the
determined first classification.

3. The method of claim 1, further comprising: determining a first sound
classification uncertainty value of the environmental sound; receiving at
least one second sound classification uncertainty value of the
environmental sound; when the determined first classification is not the
same as the at least one received second classification, comparing the
determined first sound classification uncertainty value and the at least
one second sound classification uncertainty value; and selecting an
operational classification based on the lowest of the compared
uncertainty values.

4. The method of claim 1, further comprising applying parameter settings
for the hearing assistance device appropriate for the selected
operational classification.

5. The method of claim 3, further comprising applying parameter settings
for the hearing assistance device appropriate for the selected
operational classification.

6. The method of claim 1, wherein sensing an environmental sound includes
using a microphone.

7. The method of claim 1, wherein receiving at least one second
classification of the environmental sound includes receiving the at least
one second classification from a second hearing assistance device.

8. A system comprising: a first hearing assistance device comprising: a
microphone configured to sense an environmental sound; a transceiver
configured to receive at least one second classification of the
environmental sound; and a processor including: a classification module
configured to determine a first classification of the sensed
environmental sound; and a consensus determination module configured to
compare the determined first classification and the at least one received
second classification, and, when the determined classification is the
same as the at least one received second classification, to select an
operational classification for the hearing assistance device based upon
the comparison.

9. The system of claim 8, further comprising: a second hearing assistance
device, comprising: a device microphone configured to sense the
environmental sound; a device processor including a device classification
module configured to determine a second classification of the sensed
environmental sound; and a transceiver configured to send the second
classification of the environmental sound to the first hearing assistance
device.

10. The system of claim 9, wherein the second hearing assistance device
further comprises a device consensus determination module.

11. The system of claim 8, further comprising: an on-the-body device,
comprising: a device microphone configured to sense the environmental
sound; a device processor including a device classification module
configured to determine a second classification of the sensed
environmental sound; and a transceiver configured to send the second
classification of the environmental sound to the first hearing assistance
device.

12. The system of claim 8, further comprising: an off-the-body device,
comprising: a device microphone configured to sense the environmental
sound; a device processor including a device classification module
configured to determine a second classification of the sensed
environmental sound; and a transceiver configured to send the second
classification of the environmental sound to the first hearing assistance
device.

13. The system of claim 12, wherein the off-the-body device includes a
mobile phone.

14. The system of claim 8, wherein the first hearing assistance device
includes a hearing aid.

15. The system of claim 14, wherein the hearing aid includes an
in-the-ear (ITE) hearing aid.

16. The system of claim 14, wherein the hearing aid includes a
behind-the-ear (BTE) hearing aid.

17. The system of claim 14, wherein the hearing aid includes an
in-the-canal (ITC) hearing aid.

18. The system of claim 14, wherein the hearing aid includes a
receiver-in-canal (RIC) hearing aid.

19. The system of claim 14, wherein the hearing aid includes a
completely-in-the-canal (CIC) hearing aid.

20. The system of claim 14, wherein the hearing aid includes a
receiver-in-the-ear (RITE) hearing aid.

[0002] Hearing aid users are typically exposed to a variety of sound
environments, such as speech, music, or noisy environments. Various
techniques are known and used to classify a user's sound environment,
e.g., the Bayesian classifier, the Hidden Markov Model (HMM), and the
Gaussian Mixture Model (GMM). Based on the classified sound environment,
the hearing assistance device can apply parameter settings appropriate
for the sound environment to improve a user's listening experience.

[0003] Each of the known sound environment classification techniques,
however, has less than 100% accuracy. As a result, the user's sound
environment can be misclassified. This misclassification can result in
parameter settings for the hearing assistance device that may not be
optimal for the user's sound environment.

[0004] Accordingly, there is a need in the art for improved sound
environment classification for hearing assistance devices.

SUMMARY

[0005] In general, this disclosure describes techniques for classifying a
sound environment for hearing assistance devices using redundant
estimates of an acoustical environment from two hearing assistance
devices, e.g., left and right, and accessory devices, such as an
on-the-body device, e.g., a microphone with a wireless transmitter,
and/or an off-the-body device, e.g., a mobile communication device, such
as a mobile phone or a microphone accessory, facilitated by a
communication link, e.g., wireless, between the hearing assistance
devices and the on-the-body device and/or the off-the-body device. Using various
techniques of this disclosure, each device can determine a classification
uncertainty value, which can be compared, e.g., using an error matrix and
error distribution, in order to determine a consensus for environmental
classification.

[0006] In one example, this disclosure is directed to a method of
operating a hearing assistance device that includes sensing an
environmental sound, determining a first classification of the
environmental sound, receiving at least one second classification of the
environmental sound, comparing the determined first classification and
the at least one received second classification, and selecting an
operational classification for the hearing assistance device based upon
the comparison.

[0007] In another example, this disclosure is directed to a system that
includes a first hearing assistance device that includes a microphone, a
transceiver and a processor. The microphone is configured to sense an
environmental sound and the transceiver is configured to receive at least
one second classification of the environmental sound. The processor
includes a classification module configured to determine a first
classification of the sensed environmental sound, and a consensus
determination module configured to compare the determined first
classification and the at least one received second classification, and,
when the determined classification is the same as the at least one
received second classification, to select an operational classification
for the hearing assistance device based upon the comparison. However, if,
upon comparison, the received sound classification and the determined
sound classification do not agree with one another, a binaural consensus
between the two hearing assistance devices has not been reached and, in
accordance with this disclosure, additional steps can be taken to resolve
the disagreement.

[0008] This Summary is an overview of some of the teachings of the present
application and not intended to be an exclusive or exhaustive treatment
of the present subject matter. Further details about the present subject
matter are found in the detailed description and appended claims. Other
aspects will be apparent to persons skilled in the art upon reading and
understanding the following detailed description and viewing the drawings
that form a part thereof, each of which are not to be taken in a limiting
sense. The scope of the present invention is defined by the appended
claims and their legal equivalents.

BRIEF DESCRIPTION OF DRAWINGS

[0009] FIG. 1 is a block diagram of a hearing assistance device, according
to one embodiment of this disclosure.

[0010] FIG. 2 is a block diagram illustrating an embodiment of a processor
in a hearing assistance device that can be used to implement various
techniques of this disclosure.

[0011] FIG. 3 is a block diagram illustrating an embodiment of a device
that can be used to implement various techniques of this disclosure.

[0012] FIGS. 4A and 4B are example configurations that can be used to
implement various embodiments of this disclosure.

[0013] FIG. 5 is a flow diagram illustrating an embodiment of a method for
selecting a classification of a sound environment of a hearing assistance
device in accordance with this disclosure.

DETAILED DESCRIPTION

[0014] The following detailed description of the present subject matter
refers to subject matter in the accompanying drawings which show, by way
of illustration, specific aspects and examples in which the present
subject matter may be practiced. These examples are described in
sufficient detail to enable those skilled in the art to practice the
present subject matter. References to "an", "one", or "various" examples
in this disclosure are not necessarily to the same example, and such
references contemplate more than one example. The following detailed
description is demonstrative and not to be taken in a limiting sense. The
scope of the present subject matter is defined by the appended claims,
along with the full scope of legal equivalents to which such claims are
entitled.

[0015] The present detailed description will discuss hearing assistance
devices using the example of hearing aids. Hearing aids are only one type
of hearing assistance device. Other hearing assistance devices include,
but are not limited to, those described in this document. Hearing
assistance devices include, but are not limited to, ear-level devices
that provide hearing benefit. One example is a device for treating
tinnitus. Another example is an ear protection device. Further examples
include devices that combine one or more of the functions/examples
provided herein. It is understood that their use in the description is
intended to demonstrate the present subject matter, but not in a
limiting, exclusive, or exhaustive sense.

[0016] FIG. 1 shows a block diagram of an example of a hearing assistance
device in accordance with this disclosure. In one example, hearing
assistance device 100 is a hearing aid. In one example, mic 1 102 is an
omnidirectional microphone connected to amplifier 104 that provides
signals to analog-to-digital converter 106 ("A/D converter"). The sampled
signals are sent to processor 120 that processes the digital samples and
provides them to amplifier 140. The amplified digital signals are then
converted to analog by the digital-to-analog converter 142 ("D/A
converter"). The receiver 150 (also known as a speaker) can demodulate
and play a digital signal directly, or it can play analog audio signals
received from the D/A converter 142. In various embodiments, the digital
signal is amplified and a pulse-density modulated signal is sent to the
receiver, which demodulates it, thereby extracting the analog signal.
Although FIG. 1 shows D/A converter 142 and amplifier 140 and receiver
150, it is understood that other outputs of the digital information may
be provided. For instance, in one example implementation, the digital
data is sent to another device configured to receive it. For example, the
data may be sent as streaming packets to another device that is
compatible with packetized communications. In one example, the digital
output is transmitted via digital radio transmissions. In one example,
the digital radio transmissions are packetized and adapted to be
compatible with a standard. Thus, the present subject matter is
demonstrated, but not intended to be limited, by the arrangement of FIG.
1.
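By way of a hypothetical illustration, the pulse-density modulation mentioned above can be sketched as a first-order delta-sigma loop. The code below is a didactic Python model, not the modulator of any particular device.

```python
# Didactic first-order delta-sigma (pulse-density) modulator. Input
# samples are assumed to lie in [-1, 1]; the output is a 1-bit stream
# whose density of ones tracks the instantaneous signal level.

def pdm_encode(samples):
    out = []
    err = 0.0  # running quantization error
    for x in samples:
        bit = 1 if x >= err else 0
        feedback = 1.0 if bit else -1.0
        err += feedback - x  # accumulate the fed-back error
        out.append(bit)
    return out

# A constant full-scale input yields all ones; a zero input alternates.
print(pdm_encode([1.0] * 4))  # [1, 1, 1, 1]
print(pdm_encode([0.0] * 4))  # [1, 0, 1, 0]
```

The receiver can then recover the analog signal by low-pass filtering this bitstream, which is the demodulation step described above.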

[0017] In one example, mic 2 103 is a directional microphone connected to
amplifier 105 that provides signals to analog-to-digital converter 107
("A/D converter"). The samples from A/D converter 107 are received by
processor 120 for processing. In one example, mic 2 103 is another
omnidirectional microphone. In such examples, directionality is
controllable via phasing mic 1 and mic 2. In one example, mic 1 is a
directional microphone with an omnidirectional setting. In one example,
the gain on mic 2 is reduced so that the system 100 is effectively a
single microphone system. In one example (not shown), system 100 has only
one microphone. Other variations are possible within the principles set
forth herein.

[0018] Hearing assistance device 100 can further include transceiver 160
that includes circuitry configured to wirelessly transmit and receive
information. Transceiver 160 can establish a wireless communication link
and transmit or receive information from another hearing assistance
device 100 and/or from an on-the-body device and/or an off-the-body
device, e.g., a mobile communication device, such as a mobile phone or a
microphone accessory.

[0019] In accordance with various techniques of this disclosure and as
described in more detail below, processor 120 includes modules for
execution that can classify a sound environment and determine an
environmental classification uncertainty value, which can be compared,
e.g., using an error matrix and error distribution, to a received
environmental classification uncertainty value from another hearing
assistance device 100 and/or from an on-the-body device and/or an
off-the-body device in order to determine a consensus for environmental
classification between left and right hearing assistance devices and/or
from an on-the-body device and/or an off-the-body device. An example of
an on-the-body device includes a microphone on-the-body connected to a
one-way wireless transmitter for communicating ambient sound environment
to the hearing assistance device(s).

[0020] FIG. 2 is a block diagram illustrating an example of a processor
that can be used to implement various techniques of this disclosure. In
particular, FIG. 2 depicts processor 120 of FIG. 1 including two modules,
namely sound classification module 162 and consensus determination module
164, that can be used for classifying a sound environment. Sound
classification module 162 can extract a set of features from the signals
received by mic 1 102 and/or mic 2 103 (both of FIG. 1) to classify the
sound environment of hearing assistance device 100. In some examples, the
feature sets can overlap.

[0021] In one example, sound classification module 162 uses a two-stage
environment classification scheme. The signals from mic 1 102 and/or mic 2 103
can be first classified as music, speech or non-speech. The non-speech
sounds can be further characterized as machine noise, wind noise or other
sounds. At each stage, the classification performance and the associated
computational cost are evaluated along three dimensions: the choice of
classifiers, the choice of feature sets and number of features within
each feature set.
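By way of a hypothetical illustration, the two-stage scheme described above can be sketched as follows. The decision rules and thresholds below are illustrative placeholders standing in for trained classifiers (e.g., a GMM or HMM), and the feature names are assumptions, not taken from this disclosure.

```python
# Illustrative two-stage sound environment classification. The
# threshold rules stand in for trained statistical classifiers.

def classify_stage1(features):
    """First stage: label a frame as music, speech, or non-speech."""
    if features["spectral_flux"] < 0.2 and features["zcr"] < 0.1:
        return "music"
    if features["zcr"] > 0.3:
        return "speech"
    return "non-speech"

def classify_stage2(features):
    """Second stage: refine non-speech into machine noise, wind noise, or other."""
    if features["low_freq_energy"] > 0.7:
        return "wind noise"
    if features["spectral_flux"] < 0.05:
        return "machine noise"
    return "other"

def classify(features):
    label = classify_stage1(features)
    if label == "non-speech":
        label = classify_stage2(features)
    return label
```

A two-stage design lets each stage use a smaller classifier and feature set, which matters when evaluating computational cost as described above.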

[0022] Choosing appropriate features to be implemented in the sound
classification module may be a domain-specific question. The sound
classification module 162 can include one of two feature groups: a
low-level feature set and Mel-frequency cepstral coefficients (MFCCs).
The former can include both temporal and spectral
features, such as zero crossing rate, short time energy, spectral
centroid, spectral bandwidth, spectral roll-off, spectral flux, high/low
energy ratio, etc. The logarithms of these features can be included in
the set as well. The first 12 coefficients can be included in the MFCC
set. Other features can include cepstral modulation ratio and several
psychoacoustic features.
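By way of a hypothetical illustration, two of the low-level features named above can be computed as follows. This is a minimal Python sketch over a single frame of samples, not device firmware.

```python
# Minimal implementations of two low-level temporal features:
# zero crossing rate and short-time energy of one frame.

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame)

# A frame that alternates in sign has the maximum crossing rate.
frame = [1.0, -1.0, 1.0, -1.0, 1.0]
print(zero_crossing_rate(frame))  # 1.0
print(short_time_energy(frame))   # 1.0
```

Speech tends to show a high, fluctuating zero crossing rate, while steady machine noise does not, which is why such inexpensive features are useful in a first classification stage.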

[0023] Within each set, some features may be redundant or noisy or simply
have weak discriminative capability. To identify optimal features, a
forward sequential feature selection algorithm can be employed.
Additional information regarding an example of a sound classification
technique is described in U.S. patent application Ser. No. 12/879,218,
titled "SOUND CLASSIFICATION SYSTEM FOR HEARING AIDS," by Juanjuan Xiang
et al., and filed on Sep. 10, 2010, the entire contents of which are
incorporated herein by reference.
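By way of a hypothetical illustration, a forward sequential feature selection algorithm of the kind referenced above can be sketched as follows. The scoring function and per-feature weights are assumptions standing in for, e.g., a classifier's validation accuracy on each candidate subset.

```python
# Forward sequential feature selection: starting from an empty set,
# greedily add whichever remaining feature most improves the score,
# stopping when no feature helps or k features are chosen.

def forward_select(features, score, k):
    """Greedily pick up to k features maximizing score(subset)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: sum of per-feature weights, with a penalty when the
# redundant pair "zcr"/"log_zcr" are both selected.
weights = {"zcr": 3, "log_zcr": 3, "centroid": 2, "flux": 1}

def score(subset):
    s = sum(weights[f] for f in set(subset))
    if "zcr" in subset and "log_zcr" in subset:
        s -= 3  # redundancy penalty
    return s

print(forward_select(weights, score, 2))  # ['zcr', 'centroid']
```

Note how the redundancy penalty makes the algorithm skip `log_zcr` even though it scores as well as `zcr` in isolation, mirroring the removal of redundant features discussed above.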

[0024] In some examples, upon determining a sound classification of the
received signal(s), sound classification module 162 of processor 120 can
further determine a sound classification uncertainty value. In one
example, an error matrix and error distributions can be measured, e.g.,
during training of a hearing assistance device, and stored in a memory
device (not depicted) in hearing assistance device 100. Following sound
classification, sound classification module 162 can calculate a sound
classification uncertainty value by comparing the actual results of the
sound classification to the error matrix and error distributions stored
on the memory device.
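By way of a hypothetical illustration, one plausible reading of the uncertainty calculation described above is sketched below: the stored error matrix from training estimates how often a given predicted label is wrong. The matrix counts are invented for illustration.

```python
# Deriving a classification uncertainty value from a stored error
# (confusion) matrix. Rows are true classes, columns are predicted
# classes; the counts are hypothetical training results.

LABELS = ["music", "speech", "non-speech"]

ERROR_MATRIX = [
    [90, 5, 5],    # true music
    [4, 92, 4],    # true speech
    [6, 10, 84],   # true non-speech
]

def uncertainty(predicted):
    """1 - P(true == predicted | predicted), estimated from the matrix."""
    j = LABELS.index(predicted)
    column_total = sum(row[j] for row in ERROR_MATRIX)
    correct = ERROR_MATRIX[j][j]
    return 1.0 - correct / column_total

print(round(uncertainty("music"), 3))  # 0.1
```

With these counts, 90 of the 100 frames labeled "music" during training really were music, so a "music" prediction carries an uncertainty of 0.1.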

[0025] According to various embodiments, upon determining the sound
classification uncertainty value, processor 120 can control transceiver
160 to transmit the determined sound classification to another hearing
assistance device 100. For example, processor 120 can control transceiver
160 of a first hearing assistance device 100, e.g., a hearing aid for a
left ear, to transmit a sound classification determined by classification
module 162 to a second hearing assistance device 100, e.g., a hearing aid
of a right ear. Similarly, processor 120 of the second hearing assistance
device 100 can control its transceiver 160 to transmit a sound
classification determined by its classification module 162 to the first
hearing assistance device 100, in various embodiments. In this manner,
both first and second hearing assistance devices, e.g., left and right
hearing aids, determine and exchange sound classifications.

[0026] Upon receiving a sound classification transmitted by the first
hearing assistance device 100, transceiver 160 of the second hearing
assistance device 100 outputs a signal representative of the sound
classification to processor 120. Processor 120 and, in particular,
consensus determination module 164 of the second hearing assistance
device, can execute instructions that compare the received sound
classification from the first hearing assistance device 100 to its own
determined sound classification.

[0027] Similarly, upon receiving a sound classification transmitted by the
second hearing assistance device 100, transceiver 160 of the first
hearing assistance device 100 outputs a signal representative of the
sound classification to processor 120. Processor 120 and, in particular,
consensus determination module 164 of the first hearing assistance
device, can execute instructions that compare the received sound
classification from the second hearing assistance device 100 to its own
determined sound classification. In this manner and in accordance with
this disclosure, a binaural consensus between the two hearing assistance
devices can be used in order to select an environmental classification of
the sound environment.

[0028] If, upon comparison, consensus determination module 164 of either
the first hearing assistance device or the second hearing assistance
device determines that the received sound classification and the
determined sound classification agree with one another, a binaural
consensus between the two hearing assistance devices has been reached, in
various embodiments. As such, each processor 120 of the respective
hearing assistance device can apply parameter settings appropriate for
the classified sound environment to improve the user's listening
experience.

[0029] However, if, upon comparison, consensus determination module 164 of
either the first hearing assistance device or the second hearing
assistance device determines that the received sound classification and
the determined sound classification do not agree with one another, a
binaural consensus between the two hearing assistance devices has not
been reached and, in accordance with this disclosure, additional steps
can be taken to resolve the disagreement. In one example implementation,
consensus determination module 164 of either the first hearing assistance
device or the second hearing assistance device can compare determined
sound classification uncertainty values. Like the sound classifications,
each hearing assistance device 100 can transmit and receive determined
sound classification uncertainty values. In some examples, processor 120
can transmit a determined sound classification uncertainty value along
with the transmission of the determined sound classification. In other
examples, processor 120 can transmit a determined sound classification
uncertainty value upon consensus determination module 164 determining
that a discrepancy exists following a comparison between a received sound
classification and a determined sound classification.

[0030] Consensus determination module 164 of the first hearing assistance
device 100 can receive the sound classification uncertainty value
determined by the second hearing assistance device 100. Then, consensus
determination module 164 of the first hearing assistance device 100 can
compare the two sound classification uncertainty values and select the
sound classification having the lower uncertainty value. Similarly,
consensus determination module 164 of the second hearing assistance
device 100 can receive the sound classification uncertainty value
determined by the first hearing assistance device 100. Then, consensus
determination module 164 of the second hearing assistance device 100 can
compare the two sound classification uncertainty values and select the
sound classification having the lower uncertainty value, in various
embodiments.
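By way of a hypothetical illustration, the binaural compare-and-select logic of this and the preceding paragraphs can be sketched as follows; function and variable names are illustrative, not taken from this disclosure.

```python
# Binaural consensus: agree -> use the shared label; disagree -> use
# the classification with the lower uncertainty value.

def select_operational_classification(own, received):
    """own, received: (label, uncertainty) pairs from the two devices."""
    own_label, own_u = own
    recv_label, recv_u = received
    if own_label == recv_label:
        return own_label  # binaural consensus reached
    # Disagreement: trust whichever estimate is less uncertain.
    return own_label if own_u <= recv_u else recv_label

print(select_operational_classification(("speech", 0.2), ("speech", 0.4)))  # speech
print(select_operational_classification(("speech", 0.3), ("music", 0.1)))   # music
```

Because both devices run the same deterministic rule on the same exchanged pairs, left and right aids converge on the same operational classification without further negotiation.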

[0031] In some example implementations, one of the first hearing
assistance device and the second hearing assistance device can act as a
master device in determining the sound classification. That is, rather
than both the first hearing assistance device and the second hearing
assistance device comparing sound classification uncertainty values, only
one of the two hearing assistance devices compares sound classification
uncertainty values to make a final decision regarding sound
classification. In such an implementation, the master device can
transmit the final sound classification determination to the other
device, e.g., another hearing assistance device, an on-the-body sensor,
and/or an off-the-body sensor.

[0032] In accordance with this disclosure, an on-the-body device and/or an
off-the-body device, e.g., a mobile communication device, such as a
mobile phone or a microphone accessory, can also be used to classify the
sound environment, as described in more detail below with respect to FIG.
3. Additional separate sets of overlapping features can be used by the
on-the-body or off-the-body device to classify the sound environment.
Using multiple devices to classify the sound environment can allow more
features to be used in the classification, thereby improving the accuracy
of the classification.

[0033] FIG. 3 is a block diagram illustrating an example of a device that
can be used to implement various techniques of this disclosure. In FIG.
3, device 200 can be an on-the-body device or an off-the-body device,
e.g., a mobile communication device, such as a mobile phone or a
microphone accessory. In various embodiments, device 200 includes an
omnidirectional or directional microphone system, an amplifier, an A/D
converter, a processor 208, and a wireless transmitter for communicating
with the hearing assistance devices. Device 200 can include a microphone 202, e.g., an
omnidirectional microphone, and an amplifier 204 that provides signals to
analog-to-digital converter 206 ("A/D converter"). The sampled signals
are sent to processor 208 that processes the digital samples. According
to various embodiments, processor 208 includes two modules, namely sound
classification module 210 and consensus determination module 212, that
can be used for classifying a sound environment. Sound classification
module 210 and consensus determination module 212 are similar to sound
classification module 162 and consensus determination module 164 of FIG.
2 and, for purposes of conciseness, will not be described in detail
again. Upon receiving a signal 214 via microphone 202, device 200 and, in
particular, sound classification module 210 and consensus determination
module 212 of processor 208, can determine a sound classification and a
sound classification uncertainty value in a manner similar to that
described above with respect to processor 120 of FIG. 2, which, for
purposes of conciseness, will not be described in detail again. In one
embodiment, the final sound classification can also be determined in the
on- or off-body device, e.g. cell phone, having a two-way transceiver to
receive classification and uncertainty data from hearing assistance
devices and/or other on- or off-the-body devices.

[0034] According to various embodiments, device 200 further includes
transceiver 214 that includes circuitry configured to wirelessly transmit
and receive information. Transceiver 214 can establish a wireless
communication link and transmit or receive information to one or more
hearing assistance devices 100 and/or an on-the-body device or an
off-the-body device. In particular, transceiver 214 can transmit to at
least one device, e.g., one or more hearing assistance devices 100, a
determined sound classification and a determined sound classification
uncertainty value that can be used to form a final decision of the sound
environment.

[0035] FIGS. 4A and 4B are example configurations that can be used to
implement various techniques of this disclosure. In particular, FIG. 4A
depicts a first hearing assistance device 300, a second hearing device
302, and an on-the-body device 304 in wireless communication with each
other and configured to classify a sound environment by consensus. FIG.
4B depicts a first hearing assistance device 306, a second hearing device
308, and an off-the-body device 310 in wireless communication with each
other and configured to classify a sound environment by consensus.

[0036] Referring to FIG. 4A and by way of specific example, first hearing
assistance device 300 can receive a sound classification determined by
second hearing assistance device 302 and another sound classification
determined by at least one other device, e.g., on-the-body device 304.
On-the-body device 304, e.g., a microphone with a wireless transmitter,
can be attached to a shirt of a person 305, for example. An example of
on-the-body device 304 was described above with respect to device 200 of
FIG. 3 and, for purposes of conciseness, will not be described in detail
again. Using the techniques described above, consensus determination
module 164 of the first hearing assistance device 300 can compare the
received sound classifications from the second hearing assistance device
302 and one or more devices 304.

[0037] If, upon comparison, consensus determination module 164 of the
first hearing assistance device 300 determines that the received sound
classifications and its determined sound classification agree with one
another, a consensus between the two hearing assistance devices 300, 302
and the other device 304 has been reached. As such, each processor 120 of
the respective hearing assistance device 300, 302 can apply parameter
settings appropriate for the classified sound environment to improve the
user's listening experience.

[0038] However, if, upon comparison, consensus determination module 164 of
the first hearing assistance device 300 determines that the received
sound classifications and the determined sound classification do not
agree with one another, a consensus between the devices has not been
reached and, in accordance with this disclosure, additional steps can be
taken to resolve the disagreement. In one example implementation,
consensus determination module 164 of the first hearing assistance device
300 can compare the sound classification uncertainty value that it
determined to sound classification uncertainty values determined by and
received from the second hearing assistance device 302 and the other
device 304. Then, consensus determination module 164 of the first
hearing assistance device 300 can compare the three sound classification
uncertainty values, select the sound classification having the lowest
uncertainty value, and apply parameter settings appropriate for the
classified sound environment.
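By way of a hypothetical illustration, the three-way consensus described above can be sketched as follows: unanimity is checked first, and otherwise the label with the lowest uncertainty value is selected. Names are illustrative.

```python
# Multi-device consensus over (label, uncertainty) estimates from the
# left aid, the right aid, and an on- or off-the-body device.

def consensus(estimates):
    """estimates: list of (label, uncertainty) pairs from all devices."""
    labels = {label for label, _ in estimates}
    if len(labels) == 1:
        return labels.pop()  # full consensus among all devices
    # Disagreement: select the least uncertain classification.
    return min(estimates, key=lambda e: e[1])[0]

votes = [("speech", 0.25), ("speech", 0.30), ("music", 0.10)]
print(consensus(votes))  # music
```

The same function handles the binaural (two-estimate) case, so a single consensus routine can serve both the FIG. 4A and FIG. 4B configurations.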

[0039] In some examples, processor 120 of hearing assistance devices 300,
302 can wait to control transmission of any data regarding sound
classification until after classification module 162 determines that a
change in environment has occurred. After classification module 162
determines that a change in environment has occurred, processor 120 can
generate a packet for transmission by adding the payload bits
representing the classification results determined by classification
module 162, adding destination information of another hearing assistance
device 100 and/or another device 304 to a destination field, and adding
appropriate headers and trailers.
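By way of a hypothetical illustration, the packet construction described above can be sketched as follows. The field sizes, framing byte, label codes, and checksum are invented for illustration and are not taken from any actual hearing-device protocol.

```python
# Illustrative packet: header, destination field, classification
# payload, and a simple checksum trailer.
import struct

LABEL_CODES = {"music": 0, "speech": 1, "non-speech": 2}

def build_packet(label, destination_id):
    header = b"\xAA"                    # hypothetical start-of-frame byte
    body = struct.pack("BB", destination_id, LABEL_CODES[label])
    trailer = bytes([sum(body) % 256])  # one-byte checksum as trailer
    return header + body + trailer

pkt = build_packet("speech", destination_id=2)
print(pkt.hex())  # aa020103
```

Transmitting only on an environment change, as described above, keeps such packets rare, which suits the low data rates and radio wake-up latencies discussed next.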

[0040] In example implementations that simply exchange classification
results between devices, the transmissions can be one-way and
asynchronous. In such examples, the wireless data rate can be low, e.g.,
128 kilobits per second, and the radio wake-up time can be about 250
milliseconds, for example. In example implementations that use one device
as a master device to form a classification consensus, the wireless data
rate can be low, e.g., 64 kilobits per second, and the transmit-receive
turn-around time can be about 1.6 milliseconds, for example.

[0041] As indicated above, FIG. 4B depicts a first hearing assistance
device 306, a second hearing device 308, and an off-the-body device 310
in wireless communication with each other and configured to classify a
sound environment by consensus. An example of the off-the-body device
310, e.g., a mobile communication device, such as a mobile phone or a
microphone accessory, was described above with respect to device 200 of
FIG. 3 and, for purposes of conciseness, will not be described in detail
again. In the example configuration depicted in FIG. 4B, the person 311
is holding the off-the-body device 310 but, in other configurations, the
off-the-body device 310 may not be in contact with the person 311.

[0042] The interaction between the hearing assistance device 306, the
second hearing device 308, and the off-the-body device 310 shown in FIG.
4B is substantially similar to the techniques described above with
respect to FIG. 4A between the first hearing assistance device 300, the
second hearing device 302, and the on-the-body device 304. Hence, in the
interest of brevity and to avoid redundancy, the interaction between the
hearing assistance device 306, the second hearing device 308, and the
off-the-body device 310 shown in FIG. 4B will not be described again.

[0043] FIG. 5 is a flow diagram illustrating an example of a method for
selecting a classification of a sound environment of a hearing assistance
device in accordance with this disclosure. In the example method shown in
FIG. 5, a first hearing assistance device, e.g., hearing assistance
device 100 of FIG. 1, senses an environmental sound, e.g., via mic 1 102
(400). Amplifier 104 and A/D converter 106 transmit a signal representing
the sensed environmental sound to processor 120. Processor 120 and, in
particular, classification module 162, determines a first classification
of the environmental sound, e.g., music, speech, non-speech, and the like
(402). First hearing assistance device 100 receives, via transceiver 160,
a second classification of the environmental sound from a second hearing
assistance device (404). In some examples, in addition to a second
classification received from a second hearing assistance device, first
hearing assistance device 100 also receives, via transceiver 160, a
second classification of the environmental sound from an on-the-body device
and/or an off-the-body device, e.g., a mobile communication device, such
as a mobile phone or a microphone accessory. Upon receiving one or more
second classifications, the first hearing assistance device and, more
particularly, consensus determination module 164 of processor 120,
compares the determined first classification and the received second
classification(s) (406) and selects an operational classification for the
first hearing assistance device based upon the comparison (408).
Processor 120 can then apply parameter settings appropriate for the
selected operational classification to improve the user's listening
experience.
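The method of FIG. 5 (steps 400-408) can be sketched end to end as
follows. The function and the agreement/uncertainty logic are a minimal
illustration, assuming string classification labels; the names are
hypothetical and not part of the disclosure:

```python
def select_operational_classification(first, seconds,
                                      first_unc=None, second_uncs=None):
    """Compare a locally determined classification (step 402) with
    received second classification(s) (step 404); select an operational
    classification (steps 406-408)."""
    # If all received classifications agree with the local one, use it.
    if all(s == first for s in seconds):
        return first
    # On disagreement, fall back to the lowest-uncertainty estimate
    # when uncertainty values are available.
    if first_unc is not None and second_uncs is not None:
        candidates = [(first, first_unc)] + list(zip(seconds, second_uncs))
        return min(candidates, key=lambda c: c[1])[0]
    # Without uncertainties, default to the local estimate.
    return first

print(select_operational_classification("speech", ["speech", "speech"]))  # "speech"
print(select_operational_classification("speech", ["music"], 0.4, [0.1]))  # "music"
```

Parameter settings for the selected classification would then be applied,
as described above.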

[0044] It is further understood that any hearing assistance device may be
used without departing from the scope of the present subject matter, and
the devices depicted in the figures are intended to demonstrate the
subject matter, but not in a limiting, exhaustive, or exclusive sense. It
is also understood that the
present subject matter can be used with a device designed for use in the
right ear or the left ear or both ears of the wearer.

[0045] It is understood that the hearing aids and accessories referenced
in this patent application include a processor. The processor may be a
digital signal processor (DSP), microprocessor, microcontroller, other
digital logic, or combinations thereof. The processing of signals
referenced in this application can be performed using the processor.
Processing may be done in the digital domain, the analog domain, or
combinations thereof. Processing may be done using subband processing
techniques. Processing may be done with frequency domain or time domain
approaches. Some processing may involve both frequency and time domain
aspects. For brevity, in some examples drawings may omit certain blocks
that perform frequency synthesis, frequency analysis, analog-to-digital
conversion, digital-to-analog conversion, amplification, and certain
types of filtering and processing. In various embodiments the processor
is adapted to perform instructions stored in memory which may or may not
be explicitly shown. Various types of memory may be used, including
volatile and nonvolatile forms of memory. In various embodiments,
instructions are performed by the processor to perform a number of signal
processing tasks. In such embodiments, analog components are in
communication with the processor to perform signal tasks, such as
microphone reception or receiver sound production (i.e., in applications
where such transducers are used). In various embodiments,
different realizations of the block diagrams, circuits, and processes set
forth herein may occur without departing from the scope of the present
subject matter.

[0046] The present subject matter is demonstrated for hearing assistance
devices, including hearing aids, including but not limited to,
behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC),
receiver-in-canal (RIC), or completely-in-the-canal (CIC) type hearing
aids. It is understood that behind-the-ear type hearing aids may include
devices that reside substantially behind the ear or over the ear. Such
devices may include hearing aids with receivers associated with the
electronics portion of the behind-the-ear device, or hearing aids of the
type having receivers in the ear canal of the user, including but not
limited to receiver-in-canal (RIC) or receiver-in-the-ear (RITE) designs.
The present subject matter can also be used in hearing assistance devices
generally, such as cochlear implant type hearing devices, and in deep
insertion devices having a transducer, such as a receiver or microphone,
whether custom fitted, standard, open fitted, or occlusive fitted. It is
understood that other hearing assistance devices not expressly stated
herein may be used in conjunction with the present subject matter.

[0047] This application is intended to cover adaptations or variations of
the present subject matter. It is to be understood that the above
description is intended to be illustrative, and not restrictive. The
scope of the present subject matter should be determined with reference
to the appended claims, along with the full scope of legal equivalents to
which such claims are entitled.