
A measuring unit obtains a head-related impulse response of a user based
on a sound signal which is collected by a microphone worn on an ear of
the user in a state where a predetermined sound as a measurement signal
is outputted from a speaker. A feature amount extraction unit extracts a
feature amount of a frequency characteristic corresponding to the
head-related impulse response. A characteristic selection unit selects a
head-related transfer function from a database, where head-related
transfer functions of many people are respectively made in association
with feature amounts of head-related transfer functions, based on the
extracted feature amount.

Inventors: FUJII; Yumi; (Yokohama-shi, JP)

Applicant:

Name                       City           State   Country   Type
JVC KENWOOD Corporation    Yokohama-shi           JP

Family ID: 1000002969688

Appl. No.: 15/730101

Filed: October 11, 2017

Related U.S. Patent Documents

Application Number    Filing Date    Patent Number
PCT/JP2016/052711     Jan 29, 2016
15730101

Current U.S. Class: 1/1

Current CPC Class: H04R 3/04 20130101; H04S 5/02 20130101; H04S 1/00 20130101

International Class: H04R 3/04 20060101 H04R003/04; H04S 5/02 20060101 H04S005/02

Foreign Application Data

Date           Code   Application Number
Apr 13, 2015   JP     2015-081483

Claims

1. A head-related transfer function selection device comprising: a
measuring unit configured to obtain a head-related impulse response of a
user based on a sound signal which is collected by a microphone worn on
an ear of the user in a state where a predetermined sound as a
measurement signal is outputted from a speaker; a feature amount
extraction unit configured to extract a feature amount of a frequency
characteristic corresponding to the head-related impulse response; and a
characteristic selection unit configured to select a head-related
transfer function from a database, where head-related transfer functions
of many people are respectively made in association with feature amounts
of head-related transfer functions, based on the feature amount extracted
by the feature amount extraction unit.

2. The head-related transfer function selection device according to claim
1, wherein a horizontal angle .theta. is 0.degree. and an elevation angle
.gamma. is 0.degree. in a state where the speaker is positioned in front
of a face of the user, the measuring unit obtains a plurality of
head-related impulse responses when the speaker is moved to a position
where the horizontal angle .theta. is 0.degree. or a predetermined
positive or negative value and then is moved in an arc shape in a
vertical direction to positions where the elevation angles .gamma. are a
plurality of values, respectively, and the feature amount extraction unit
extracts feature amounts based on frequency characteristics corresponding
to the plurality of head-related impulse responses.

3. The head-related transfer function selection device according to claim
2, wherein the measuring unit further obtains a plurality of head-related
impulse responses when the speaker is moved to positions where the
elevation angles .gamma. are 0.degree. and the horizontal angles .theta.
are predetermined positive and negative values, respectively.

4. A head-related transfer function selection method comprising:
generating a predetermined sound as a measurement signal from a speaker;
obtaining a head-related impulse response of a user based on a sound
signal of the predetermined sound which is collected by a microphone worn
on an ear of the user; extracting a feature amount of a frequency
characteristic corresponding to the head-related impulse response; and
selecting a head-related transfer function from a database, where
head-related transfer functions of many people are respectively made in
association with feature amounts of head-related transfer functions,
based on the extracted feature amount.

5. A head-related transfer function selection program stored in a
non-transitory storage medium, the program allowing a computer to
execute: a step of obtaining a head-related impulse response of a user
based on a sound signal which is collected by a microphone worn on an ear
of the user in a state where a predetermined sound as a measurement
signal is outputted from a speaker; a step of extracting a feature amount
of a frequency characteristic corresponding to the head-related impulse
response; and a step of selecting a head-related transfer function from
a database, where head-related transfer functions of many people are
respectively made in association with feature amounts of head-related
transfer functions, based on the extracted feature amount.

6. A sound reproduction device comprising: a measuring unit configured to
obtain a head-related impulse response of a user based on a sound signal
which is collected by a microphone worn on an ear of the user in a state
where a predetermined sound as a measurement signal is outputted from a
speaker; a feature amount extraction unit configured to extract a feature
amount of a frequency characteristic corresponding to the head-related
impulse response; a characteristic selection unit configured to select a
head-related transfer function from a database, where head-related
transfer functions of many people are respectively made in association
with feature amounts of head-related transfer functions, based on the
feature amount extracted by the feature amount extraction unit; and a
reproduction unit configured to perform a convolution operation on sound
data with the head-related transfer function selected by the
characteristic selection unit, and to reproduce the sound data.

Description

CROSS REFERENCE TO RELATED APPLICATION

[0001] This application is a Continuation of PCT Application No.
PCT/JP2016/052711 filed on Jan. 29, 2016, and claims the priority of
Japanese Patent Application No. 2015-081483 filed on Apr. 13, 2015, the
entire contents of both of which are incorporated herein by reference.

BACKGROUND

[0002] The present disclosure relates to a head-related transfer function
selection device, a head-related transfer function selection method, a
head-related transfer function selection program capable of selecting a
head-related transfer function similar to that of a user, and a sound
reproduction device that can reproduce a sound signal using a
head-related transfer function similar to that of the user.

[0003] When the user listens to a sound through headphones (earphones)
reproducing a sound signal, a phenomenon called in-head localization, in
which the user feels as if a sound is ringing in his or her head, is
likely to occur. By utilizing a technique of localizing the sound using a
head-related transfer function of a dummy head or the head of another
user such that the user feels as if the sound is ringing outside his or
her head, the phenomenon called in-head localization can be reduced.

SUMMARY

[0004] Characteristics of a head-related transfer function vary depending
on the shape of the head or the auricle. Accordingly, it is desirable to
localize a sound using a head-related transfer function of the user who
wears headphones and listens to the sound such that the user feels as if
the sound is ringing outside his or her head. However, it is not easy for
the user to measure the head-related transfer function himself or herself
in daily life.

[0005] A first aspect of the embodiment provides a head-related transfer
function selection device including: a measuring unit configured to
obtain a head-related impulse response of a user based on a sound signal
which is collected by a microphone worn on an ear of the user in a state
where a predetermined sound as a measurement signal is outputted from a
speaker; a feature amount extraction unit configured to extract a feature
amount of a frequency characteristic corresponding to the head-related
impulse response; and a characteristic selection unit configured to
select a head-related transfer function from a database, where
head-related transfer functions of many people are respectively made in
association with feature amounts of head-related transfer functions,
based on the feature amount extracted by the feature amount extraction
unit.

[0006] A second aspect of the embodiment provides a head-related transfer
function selection method including: generating a predetermined sound as
a measurement signal from a speaker; obtaining a head-related impulse
response of a user based on a sound signal of the predetermined sound
which is collected by a microphone worn on an ear of the user; extracting
a feature amount of a frequency characteristic corresponding to the
head-related impulse response; and selecting a head-related transfer
function from a database, where head-related transfer functions of many
people are respectively made in association with feature amounts of
head-related transfer functions, based on the extracted feature amount.

[0007] A third aspect of the embodiment provides a head-related transfer
function selection program stored in a non-transitory storage medium, the
program allowing a computer to execute: a step of obtaining a
head-related impulse response of a user based on a sound signal which is
collected by a microphone worn on an ear of the user in a state where a
predetermined sound as a measurement signal is outputted from a speaker;
a step of extracting a feature amount of a frequency characteristic
corresponding to the head-related impulse response; and a step of
selecting a head-related transfer function from a database, where
head-related transfer functions of many people are respectively made in
association with feature amounts of head-related transfer functions,
based on the extracted feature amount.

[0008] A fourth aspect of the embodiment provides a sound reproduction
device including: a measuring unit configured to obtain a head-related
impulse response of a user based on a sound signal which is collected by a
microphone worn on an ear of the user in a state where a predetermined
sound as a measurement signal is outputted from a speaker; a feature
amount extraction unit configured to extract a feature amount of a
frequency characteristic corresponding to the head-related impulse
response; a characteristic selection unit configured to select a
head-related transfer function from a database, where head-related
transfer functions of many people are respectively made in association
with feature amounts of head-related transfer functions, based on the
feature amount extracted by the feature amount extraction unit; and a
reproduction unit configured to perform a convolution operation on sound
data with the head-related transfer function selected by the
characteristic selection unit, and to reproduce the sound data.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] FIG. 1 is a block diagram illustrating a head-related transfer
function selection device and a sound reproduction device according to at
least one embodiment.

[0010] FIG. 2 is a flowchart illustrating the first measurement example
for measuring a head-related impulse response of a user.

[0011] FIG. 3 is a schematic diagram illustrating a state where a portable
terminal is moved to a position in front of a face where a horizontal
angle is 0.degree. and an elevation angle is 0.degree..

[0012] FIG. 4 is a schematic diagram illustrating a state where the
portable terminal is moved from a position where the elevation angle is
0.degree. to positions where the elevation angles are 30.degree. and
60.degree., respectively.

[0013] FIG. 5 is a diagram illustrating a measurement pattern obtained
from the first measurement example.

[0014] FIG. 6 is a characteristic diagram illustrating head-related
transfer functions when a sound of a measurement signal is outputted from
a speaker in a dead-sound chamber at the horizontal angle of 0.degree.
and at different elevation angles.

[0015] FIG. 7 is a flowchart illustrating the second measurement example
for measuring a head-related impulse response of a user.

[0016] FIG. 8 is a schematic diagram illustrating a state where the
portable terminal is moved from a position where the horizontal angle is
-30.degree. to positions where the horizontal angles are 0.degree. and
30.degree., respectively.

[0017] FIG. 9 is a diagram illustrating measurement patterns obtained from
the second measurement example.

[0018] FIG. 10 is a flowchart illustrating the third measurement example
for measuring a head-related impulse response of a user.

[0019] FIG. 11 is a diagram illustrating measurement patterns obtained
from the third measurement example.

[0020] FIG. 12 is a table collectively illustrating the first to fourth
measurement examples.

DETAILED DESCRIPTION

[0021] Hereinafter, a head-related transfer function selection device, a
head-related transfer function selection method, a head-related transfer
function selection program, and a sound reproduction device according to
the embodiment will be described with reference to the accompanying
drawings.

[0022] First, the overall configuration of the head-related transfer
function selection device and the sound reproduction device according to
the embodiment will be described with reference to FIG. 1.

[0023] In FIG. 1, a general-purpose portable terminal 100 functions as a
head-related transfer function selection device and a sound reproduction
device. For example, the portable terminal 100 may be a mobile phone such
as a smartphone.

[0024] The portable terminal 100 includes a camera 1, an acceleration sensor
2, and an angular velocity sensor 3. The camera 1, the acceleration
sensor 2, and the angular velocity sensor 3 are connected to a controller
4 which is configured by, for example, a CPU. The controller 4 includes a
measuring unit 41, a feature amount extraction unit 42, a characteristic
selection unit 43, and a reproduction unit 44.

[0025] An image signal obtained by the camera 1 capturing an object is
inputted to the measuring unit 41 and then is supplied from the measuring
unit 41 to the display 10 to display an image. When the user performs a
predetermined operation through an operation unit (not illustrated), the
camera 1 may capture the object to generate the image signal.

[0026] An acceleration detection signal detected by the acceleration
sensor 2 and an angular velocity detection signal, which represents the
tilt or angle of the portable terminal 100, detected by the angular
velocity sensor 3 are inputted to the measuring unit 41. The acceleration
sensor 2 and the angular velocity sensor 3 may operate at all times in a
state where power is supplied to the portable terminal 100.

[0027] The measuring unit 41 can generate digital sound data which is a
predetermined measurement signal for measuring a head-related impulse
response (HRIR) of the user. When the user performs a predetermined
operation through the operation unit, the measuring unit 41 supplies the
digital sound data to a D/A converter 5.

[0028] The D/A converter 5 converts the digital sound data into an analog
sound signal, and supplies the converted analog sound signal to a speaker
6. The speaker 6 may be a built-in speaker of the portable terminal 100.
As the speaker 6, an external speaker may be used. The speaker 6 may be a
monaural speaker or a stereo speaker.

[0029] Headphones 40 may be attached to a sound signal output terminal 7.
The way to use the headphones 40 will be described below.

[0030] A microphone 20 is connected to a microphone connection terminal 8.
It is preferable that the microphone 20 be an earphone-type microphone
which is wearable on the auricle of the user. The microphone 20 may be a
monaural microphone or a stereo microphone. In the embodiment, the
microphone 20 is a monaural microphone.

[0031] When a sound is outputted from the speaker 6 in a state where the
user positions the portable terminal 100 in front of the face of the user
as described below, the microphone 20 collects the sound. An analog sound
signal outputted from the microphone 20 is inputted to an A/D converter 9
through the microphone connection terminal 8. The A/D converter 9
converts the analog sound signal into digital sound data, and supplies
the converted digital sound data to the measuring unit 41.

[0032] The digital sound data inputted to the measuring unit 41 represents
the HRIR of the user which varies depending on the shape of the head or
auricle of the user.
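
The patent does not state how the measuring unit computes the HRIR from the collected sound data. One common approach, sketched here purely as an assumption (the function name, regularization constant, and tap count are illustrative), is frequency-domain deconvolution of the recording by the known measurement signal:

```python
import numpy as np

def estimate_hrir(recorded, measurement, n_taps=512, eps=1e-8):
    """Estimate an HRIR by frequency-domain deconvolution of the
    microphone recording by the known measurement signal. `eps`
    regularizes frequency bins where the measurement signal carries
    little energy."""
    n = len(recorded) + len(measurement) - 1
    rec_f = np.fft.rfft(recorded, n)
    meas_f = np.fft.rfft(measurement, n)
    hrir_f = rec_f * np.conj(meas_f) / (np.abs(meas_f) ** 2 + eps)
    return np.fft.irfft(hrir_f, n)[:n_taps]
```

With a broadband measurement signal (for example, a noise burst or a swept sine), the quotient approaches the impulse response of the path from the speaker 6 to the microphone 20.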

[0033] The measuring unit 41 obtains HRIRs when the portable terminal 100
is positioned at a plurality of positions, and temporarily stores the
obtained HRIRs in a storage unit 11. The HRIRs stored in the storage unit
11 are inputted to the feature amount extraction unit 42.

[0034] The feature amount extraction unit 42 transforms the inputted HRIRs
to generate head-related transfer functions (HRTFs) by using Fourier
transformation. After the transformation of the HRIRs into the HRTFs, the
measuring unit 41 may store the HRTFs in the storage unit 11.
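
The transformation in the preceding paragraph can be sketched as follows; the FFT length and the conversion to a dB magnitude spectrum are assumptions, since the text states only that a Fourier transformation is used:

```python
import numpy as np

def hrir_to_hrtf(hrir, n_fft=1024):
    """Transform an HRIR into an HRTF magnitude spectrum (in dB)
    via the discrete Fourier transform."""
    spectrum = np.fft.rfft(hrir, n_fft)
    return 20.0 * np.log10(np.abs(spectrum) + 1e-12)
```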

[0035] The feature amount extraction unit 42 extracts a feature amount
from the HRTF of the user. The details of the feature amount will be
described below. The feature amount extracted by the feature amount
extraction unit 42 is inputted to the characteristic selection unit 43.

[0036] An external server 30 stores a database 301 where HRTFs of many
people are respectively made in association with feature amounts of HRTFs
described below. The characteristic selection unit 43 accesses a server
30 through a communication unit 12, and selects an HRTF having a feature
amount, which is most similar to the feature amount extracted by the
feature amount extraction unit 42, from the database 301.
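
The text specifies only that the HRTF with the "most similar" feature amount is selected. A minimal sketch, assuming a Euclidean distance as the similarity measure and a simple list of (feature amount, HRTF) pairs as the database layout:

```python
import numpy as np

def select_hrtf(user_feature, database):
    """Return the HRTF from `database` (a list of (feature_amount,
    hrtf) pairs) whose feature amount is closest to the user's."""
    user_feature = np.asarray(user_feature, dtype=float)
    best = min(
        database,
        key=lambda entry: np.linalg.norm(
            np.asarray(entry[0], dtype=float) - user_feature),
    )
    return best[1]
```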

[0037] The selected HRTF is inputted to the characteristic selection unit
43 through the communication unit 12. The selected HRTF is substantially
the same as the HRTF of the user. The characteristic selection unit 43
supplies the HRTF to the reproduction unit 44.

[0038] The database 301 may be built in the portable terminal 100 in
advance. The portable terminal 100 may access the server 30, read data of
the database 301, and store the same data as that of the database 301 in
the storage unit 11 or another storage unit (not illustrated).

[0039] Digital sound data to be reproduced by the portable terminal 100
is inputted from an external device to the reproduction unit 44 through a
sound signal input terminal 13. Digital sound data stored in the storage
unit, which is built in the portable terminal 100, may be inputted to the
reproduction unit 44. In a case where an analog sound signal is inputted
from an external device, the analog sound signal may be converted into
digital sound data by the A/D converter 9 or another A/D converter such
that the converted digital sound data is supplied to the reproduction
unit 44.

[0040] The reproduction unit 44 includes a filter 441 that performs an HRTF
convolution operation on the digital sound data. The filter 441
convolves the inputted digital sound data with the HRTF selected by the
characteristic selection unit 43 and supplies the convolved data to the
D/A converter 5. The D/A converter 5 converts the
digital sound data, which is supplied from the reproduction unit 44, into
an analog sound signal.
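
The operation of the filter 441 can be sketched as below, assuming the selected HRTF is applied as a pair of left and right time-domain impulse responses convolved with mono sound data (the patent does not specify the filter structure):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve mono sound data with left and right impulse
    responses to produce a two-channel, localized signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)
```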

[0041] The analog sound signal outputted from the D/A converter 5 is
supplied to the headphones 40 through the sound signal output terminal 7.
The headphones 40 are an arbitrary type of headphones such as an overhead
type, an inner ear type, or a canal type. Examples of the headphones
described herein include earphones. The headphones 40 and the microphone
20 may be integrated.

[0042] The user wears the headphones 40 on his or her head or the auricle
and listens to a sound which is generated based on the analog sound
signal outputted from the sound signal output terminal 7. Since
substantially the same HRTF as that of the user is convolved by the
filter 441, the user can listen to the sound which is localized outside
the head in a state where the sound is adjusted to be suitable for the
user.

[0043] In addition, the user can listen to the sound in a state where the
user feels as if left and right sounds are ringing in predetermined
angular directions as described below.

[0044] Specific measurement examples for measuring the HRIR of the user
will be sequentially described.

First Measurement Example

[0045] The first measurement example will be described using a flowchart
illustrated in FIG. 2. The flowchart illustrated in FIG. 2 or a flowchart
described below includes a step regarding an operation which is performed
by the user, and a step regarding a process which is performed in the
portable terminal 100.

[0046] In step S11 of FIG. 2, the user wears the microphone 20 on one ear
and moves the portable terminal 100 to a position where the elevation
angle .gamma. is 0.degree. and the horizontal angle .theta. is 0.degree..

[0047] Specifically, as illustrated in FIG. 3, the user wears the
microphone 20 on the left ear 50L, and moves the portable terminal 100 in
front of the head 50 (face), for example. It is assumed that, in a state
where the portable terminal 100 is positioned in front of the face, the
horizontal angle .theta. is 0.degree..

[0048] In addition, in order to verify that the portable terminal 100 is
correctly positioned at a desired position, the position of the portable
terminal may be adjusted using an image obtained by the camera 1 and
information obtained by the acceleration sensor 2 and the angular
velocity sensor 3, so as to be positioned in front of the face.

[0049] When the portable terminal 100 is moved around the center of the
head 50 in an arc shape and in a vertical direction as illustrated in
FIG. 4, the angle in the vertical direction is set as the elevation angle
.gamma.. It is assumed that, in a state where the user moves the portable
terminal 100 to a position at the height of the left ear 50L or the right
ear 50R, the elevation angle .gamma. is 0.degree..

[0050] The position of the portable terminal 100 indicated by a solid line
in FIGS. 3 and 4 is a setting position of the portable terminal 100 in
step S11.

[0051] In step S13, the user moves the portable terminal 100 from the
position where the elevation angle .gamma. is 0.degree. to positions
where the elevation angles .gamma. are 30.degree. and 60.degree.,
respectively, in a state where the sound of the measurement signal is
outputted from the speaker 6. At this time, the measuring unit 41 obtains
HRIRs at the elevation angles .gamma. of 0.degree., 30.degree., and
60.degree., respectively.

[0052] The image signal obtained by the camera 1 capturing the object, the
acceleration detection signal outputted from the acceleration sensor 2,
and the angular velocity detection signal outputted from the angular
velocity sensor 3 are inputted to the measuring unit 41. Accordingly, the
measuring unit 41 may obtain the HRIRs when the portable terminal 100 is
moved to the positions where the elevation angles .gamma. are 0.degree.,
30.degree., and 60.degree., respectively.
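
One way the measuring unit might pick, from frames recorded during the sweep, those nearest the target elevation angles is sketched below; the tolerance and the (angle, frame) data layout are assumptions, not taken from the text:

```python
def frames_at_angles(samples, targets, tol=5.0):
    """From (elevation_deg, frame) pairs streamed during the sweep,
    keep for each target angle the frame recorded closest to it,
    within `tol` degrees."""
    picked = {}
    for angle, frame in samples:
        for t in targets:
            if abs(angle - t) <= tol:
                prev = picked.get(t)
                if prev is None or abs(angle - t) < abs(prev[0] - t):
                    picked[t] = (angle, frame)
    return {t: frame for t, (angle, frame) in picked.items()}
```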

[0053] It is not necessary for the user to pay special attention to the
elevation angle .gamma., and it is sufficient that the user moves the
portable terminal 100 in the vertical direction to a position where the
elevation angle .gamma. is in a range of 0.degree. to 60.degree.. At this
time, in a case where deviation of the portable terminal 100 from the
moving path during the measurement is detected, based on the image
obtained from the camera 1 and the information obtained from the
acceleration sensor 2 and the angular velocity sensor 3, the path may be
corrected through a process of displaying the correct path on the display
10, for example.

[0054] Next, in step S14, the user wears the microphone 20 on the other
ear and moves the portable terminal 100 to a position where the elevation
angle .gamma. is 0.degree. and the horizontal angle .theta. is 0.degree..

[0055] In step S16, the user moves the portable terminal 100 from the
position where the elevation angle .gamma. is 0.degree. to positions
where the elevation angles .gamma. are 30.degree. and 60.degree.,
respectively, in a state where the sound of the measurement signal is
outputted from the speaker 6. At this time, the measuring unit 41 obtains
HRIRs at the elevation angles .gamma. of 0.degree., 30.degree., and
60.degree., respectively.

[0056] A measurement pattern obtained from the first measurement example
is the measurement pattern MP1 illustrated in FIG. 5. The elevation
angles of 0.degree., 30.degree., and 60.degree. are merely examples.
Another elevation angle may be adopted, and the number of elevation
angles .gamma. is not limited to three. The number of elevation angles
.gamma. is preferably two or more.

[0057] In step S17, the feature amount extraction unit 42 extracts a
feature amount of an HRIR. For example, the feature amount extraction
unit 42 may extract a feature amount of an HRIR as follows.

[0058] In FIG. 6, a characteristic indicated by a solid line shows an HRTF
which is measured when a sound of a measurement signal is outputted from
the speaker 6 in a dead-sound chamber at a horizontal angle .theta. of
0.degree. and an elevation angle .gamma. of 0.degree.. A characteristic
indicated by a one-dot chain line shows an HRTF which is measured when a
sound of a measurement signal is outputted from the speaker 6 in a
dead-sound chamber at a horizontal angle .theta. of 0.degree. and an
elevation angle .gamma. of 10.degree..

[0059] The characteristics of the HRTFs illustrated in FIG. 6 vary
depending on the shape of the head of an individual and the shape of an
ear thereof. The Massachusetts Institute of Technology, the Itakura
Laboratory at Nagoya University, and others have released databases of
HRTFs measured at incidence angles in all directions on the Internet.

[0060] FIG. 6 is a diagram illustrating measurement data of a specific
test subject at a horizontal angle of 0.degree. and elevation angles of
0.degree. to 30.degree. which is obtained from the database of HRTFs
measured in a dead-sound chamber, which is released by Advanced Acoustic
Information Systems, Research Institute of Electrical Communication,
Tohoku University
(http://www.ais.riec.tohoku.ac.jp/lab/db-hrtf/index-j.html).

[0061] A characteristic indicated by a broken line shows an HRTF which is
measured when a sound of a measurement signal is outputted from the
speaker 6 in a dead-sound chamber at a horizontal angle .theta. of
0.degree. and an elevation angle .gamma. of 20.degree.. A characteristic
indicated by a two-dot chain line shows an HRTF which is measured when a
sound of a measurement signal is outputted from the speaker 6 in a
dead-sound chamber at a horizontal angle .theta. of 0.degree. and an
elevation angle .gamma. of 30.degree..

[0062] As illustrated in FIG. 6, frequencies of a local peak P2 in a
frequency range of 10 kHz to 20 kHz are substantially the same at the
elevation angles .gamma. of 0.degree. to 30.degree.. Here, frequencies of
the peak P2 are also substantially the same at elevation angles .gamma.
of 30.degree. to 60.degree. (not illustrated).

[0063] When the present inventors inspected measurement data of other test
subjects and created a graph with reference to the above-described
database, the following was found. When the same test subject was
inspected, the frequencies of the peak P2 were the same or substantially
the same at the elevation angles .gamma. of 0.degree. to 30.degree.. On
the other hand, when different test subjects were compared to each other,
the frequencies of the peak P2 were different from each other at the
elevation angles 0.degree. to 30.degree.. Therefore, the feature amount
extraction unit 42 extracts the frequencies of the peak P2 as a feature
amount of an HRTF of an individual user.
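
The extraction of the peak P2 frequency described above can be sketched as follows; the sample rate and FFT length are assumed values:

```python
import numpy as np

def peak_p2_frequency(hrtf_db, sample_rate=48000, n_fft=1024):
    """Return the frequency (Hz) of the local peak P2 of a one-sided
    HRTF magnitude spectrum within the 10 kHz to 20 kHz band."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    band_idx = np.flatnonzero((freqs >= 10_000) & (freqs <= 20_000))
    peak = band_idx[np.argmax(np.asarray(hrtf_db)[band_idx])]
    return freqs[peak]
```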

[0064] In addition to the frequencies of the peak P2, the feature amount
extraction unit 42 may extract a variation in the amplitude of the peak
P2 corresponding to the elevation angle .gamma. as a feature amount of an
HRTF.

[0065] A feature amount of an HRTF measured by the measurement pattern MP1
of FIG. 5 will be called "feature amount 1". In the database 301, HRTFs
of many people are respectively made in association with at least feature
amounts 1.

[0066] Returning to FIG. 2, in step S18, the characteristic selection unit
43 selects an HRTF having a feature amount, which is most similar to the
feature amount 1 extracted by the feature amount extraction unit 42, from
the database 301, sets the selected HRTF to the reproduction unit 44, and
ends the process.

[0067] For example, the HRTF is data of HRTF (.theta.,0) and HRTF
(-.theta.,0) for localizing left and right sounds in directions of
horizontal angles .+-..theta. at an elevation angle .gamma.. The
horizontal angle .theta. is 30.degree., for example.

Second Measurement Example

[0068] The second measurement example will be described using a flowchart
illustrated in FIG. 7. In step S21 of FIG. 7, the user wears the
microphone 20 on one ear and moves the portable terminal 100 to a
predetermined position in the horizontal direction where the elevation
angle .gamma. is 0.degree..

[0069] Specifically, as illustrated in FIG. 8, the user wears the
microphone 20 on the left ear 50L, for example, and moves the portable
terminal 100 to the left side with respect to the front of the head 50
(face), for example. In the second measurement example, as in the first
measurement example, in order to verify that the portable terminal 100 is
correctly positioned at a desired position, the position of the portable
terminal may be adjusted using the image obtained by the camera 1 and the
information obtained by the acceleration sensor 2 and the angular
velocity sensor 3, so as to be positioned at the desired position.

[0070] In step S22, in a state where a sound of a measurement signal is
outputted from the speaker 6, the user moves the portable terminal 100
around the center of the head 50 in an arc shape in the horizontal
direction as indicated by a two-dot chain line in FIG. 8. At this time,
the measuring unit 41 obtains HRIRs at the horizontal angles .theta. of
-30.degree. and 30.degree..

[0071] Here, similarly, the image signal obtained by the camera 1 capturing the
object, the acceleration detection signal outputted from the acceleration
sensor 2, and the angular velocity detection signal outputted from the
angular velocity sensor 3 are inputted to the measuring unit 41.
Accordingly, the measuring unit 41 may obtain the HRIRs when the portable
terminal 100 is moved to the positions where the horizontal angles
.theta. are -30.degree. and 30.degree., respectively.

[0072] It is not necessary for the user to pay special attention to the
horizontal angle .theta., and it is sufficient that the user moves the
portable terminal 100 in the horizontal direction to a position where the
horizontal angle .theta. is in a range of -30.degree. to 30.degree..

[0073] Next, in step S23, the user moves the portable terminal 100 to a
position where the horizontal angle .theta. is 0.degree. in a state where
the sound of the measurement signal is outputted from the speaker 6, and
then moves the portable terminal 100 from a position where the elevation
angle .gamma. is 0.degree. to positions where the elevation angles
.gamma. are 30.degree. and 60.degree., respectively. At this time, the
measuring unit 41 obtains HRIRs at the elevation angles .gamma. of
0.degree., 30.degree., and 60.degree., respectively.

[0074] Next, in step S24, the user wears the microphone 20 on the other
ear and, as in the case of step S21, moves the portable terminal 100 to a
predetermined position in the horizontal direction where the elevation
angle .gamma. is 0.degree..

[0075] In step S25, in a state where the sound of the measurement signal
is outputted from the speaker 6, the user moves the portable terminal 100
around the center of the head 50 in an arc shape in the horizontal
direction. At this time, the measuring unit 41 obtains HRIRs at the
horizontal angles .theta. of -30.degree. and 30.degree..

[0076] Next, in step S26, the user moves the portable terminal 100 to a
position where the horizontal angle .theta. is 0.degree. in a state where
the sound of the measurement signal is outputted from the speaker 6, and
then moves the portable terminal 100 from a position where the elevation
angle .gamma. is 0.degree. to positions where the elevation angles
.gamma. are 30.degree. and 60.degree., respectively. At this time, the
measuring unit 41 obtains HRIRs at the elevation angles .gamma. of
0.degree., 30.degree., and 60.degree., respectively.

[0077] In the second measurement example, as in the first measurement
example, in a case where deviation of the portable terminal 100 from the
moving path during the measurement is detected based on the image
obtained from the camera 1 and the information obtained from the
acceleration sensor 2 and the angular velocity sensor 3, the path may be
corrected through a process of displaying a correct path on the display
10, for example.

[0078] Measurement patterns obtained from the second measurement example
are the measurement pattern MP1 and a measurement pattern MP2 illustrated
in FIG. 9. In FIG. 7, the measurement using the measurement pattern MP1
is performed after the measurement using the measurement pattern MP2, but
the order may be reversed.

[0079] Likewise, the elevation angles .gamma. of 0.degree., 30.degree.,
and 60.degree. are merely examples. Another elevation angle may be
adopted, and the number of elevation angles .gamma. is not limited to
three. The number of elevation angles .gamma. is preferably two or more.
The horizontal angle .theta. is not limited to -30.degree. and
30.degree..

[0080] In step S27, the feature amount extraction unit 42 extracts a
feature amount of an HRTF. For example, the feature amount extraction
unit 42 may extract a feature amount of an HRIR as follows.

[0081] When the horizontal angle .theta. is -30.degree. in the measurement
pattern MP2 of FIG. 9, the frequencies of the peak P2 will be called a
feature amount 4. When the horizontal angle .theta. is 30.degree. in the
measurement pattern MP2 of FIG. 9, the frequencies of the peak P2 will be
called a feature amount 5. In the database 301, HRTFs of many people are
respectively made in association with at least feature amounts 1, 4, and
5.

[0082] The frequencies of a peak P1 at about 4 kHz in FIG. 6 may be added
as a feature amount to the feature amounts 4 and 5. The frequencies of
the peak P1 vary from person to person. Therefore, the frequencies of
the peak P1 can be set as a feature amount of an HRTF of an individual
user. An amplitude value of the peak P1 may also be added as a feature
amount of an HRIR.
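The extraction of a peak frequency as a feature amount can be sketched in
code. The snippet below is illustrative only and is not part of the
embodiment: the function name, the FFT length, and the 3-6 kHz search band
(chosen to bracket the peak P1 at about 4 kHz) are all assumptions.

```python
import numpy as np

def peak_frequency(hrir, fs, band=(3000.0, 6000.0)):
    """Return the frequency (Hz) of the strongest magnitude peak of the
    HRIR's frequency characteristic within the given band.

    hrir: 1-D head-related impulse response samples.
    fs:   sampling rate in Hz.
    band: assumed search range bracketing the ~4 kHz peak P1.
    """
    n = 4096                                   # zero-pad for finer frequency bins
    spectrum = np.abs(np.fft.rfft(hrir, n))    # magnitude of the frequency characteristic
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    idx = np.argmax(spectrum[mask])            # strongest component inside the band
    return float(freqs[mask][idx])
```

A feature amount of this kind would be computed once per measured HRIR and
stored alongside the HRTF data.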

[0083] Returning to FIG. 7, in step S28, the characteristic selection unit
43 selects an HRTF having feature amounts, which are most similar to the
feature amounts 1, 4, and 5 extracted by the feature amount extraction
unit 42, from the database 301, sets the selected HRTF to the
reproduction unit 44, and ends the process.
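Conceptually, the selection in step S28 is a nearest-neighbor lookup over
the stored feature amounts. A minimal sketch follows, assuming a
hypothetical database layout (a list of dictionaries with "features" and
"hrtf" keys) and Euclidean distance as the similarity measure; the patent
does not specify the metric.

```python
import numpy as np

def select_hrtf(database, user_features):
    """Return the HRTF whose stored feature amounts are most similar
    (smallest Euclidean distance) to the user's extracted features.

    database:      list of {"features": [...], "hrtf": ...} entries,
                   one per person (hypothetical layout).
    user_features: feature amounts extracted for the user, e.g. the
                   feature amounts 1, 4, and 5 as a vector.
    """
    user = np.asarray(user_features, dtype=float)
    best = min(
        database,
        key=lambda entry: np.linalg.norm(np.asarray(entry["features"]) - user),
    )
    return best["hrtf"]
```

For example, with two stored entries, the entry whose feature vector lies
closer to the user's vector is returned.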

[0084] Specific data of the HRTF is the same as that of the first
measurement example. For example, the HRTF is data of HRTF (.theta.,0)
and HRTF (-.theta.,0) for localizing left and right sounds in directions
of horizontal angles .+-..theta..degree. at an elevation angle .gamma. of
0.degree..
The horizontal angle .theta..degree. is 30.degree., for example.

Third Measurement Example

[0085] The third measurement example will be described using a flowchart
illustrated in FIG. 10. In step S301 of FIG. 10, the user wears the
microphone 20 on one ear and moves the portable terminal 100 to a
position where an elevation angle .gamma. is 0.degree. and a horizontal
angle .theta. is -30.degree..

[0086] A position of the portable terminal 100 indicated by a solid line
in FIG. 8 is a setting position of the portable terminal 100 in step
S301. In the third measurement example, as in the first or second
measurement example, in order to verify that the portable terminal 100 is
correctly positioned at a desired position, the position of the portable
terminal may be adjusted using the image obtained by the camera 1, the
information obtained by the acceleration sensor 2, and the angular
velocity sensor 3 so as to be positioned in front of the face.

[0087] In step S302, in a state where the sound of the measurement signal
is outputted from the speaker 6, the user moves the portable terminal 100
in the elevation angle direction. At this time, the measuring unit 41
obtains HRIRs at the elevation angles .gamma. of 0.degree., 30.degree.,
and 60.degree., respectively.

[0088] Next, in step S303, the user moves the portable terminal 100 to a
position where the elevation angle .gamma. is 0.degree. and the
horizontal angle .theta. is 30.degree..

[0089] In step S304, in a state where the sound of the measurement signal
is outputted from the speaker 6, the user moves the portable terminal 100
in the elevation angle direction. At this time, the measuring unit 41
obtains HRIRs at the elevation angles .gamma. of 0.degree., 30.degree.,
and 60.degree., respectively.

[0090] Next, in step S305, the user wears the microphone 20 on the other
ear and, as in the case of step S301, moves the portable terminal 100 to
a position where the elevation angle .gamma. is 0.degree. and the
horizontal angle .theta. is -30.degree..

[0091] In step S306, in a state where the sound of the measurement signal
is outputted from the speaker 6, the user moves the portable terminal 100
in the elevation angle direction. At this time, the measuring unit 41
obtains HRIRs at the elevation angles .gamma. of 0.degree., 30.degree.,
and 60.degree., respectively.

[0092] Next, in step S307, the user moves the portable terminal 100 to a
position where the elevation angle .gamma. is 0.degree. and the
horizontal angle .theta. is 30.degree..

[0093] In step S308, in a state where the sound of the measurement signal
is outputted from the speaker 6, the user moves the portable terminal 100
in the elevation angle direction. At this time, the measuring unit 41
obtains HRIRs at the elevation angles of 0.degree., 30.degree., and
60.degree., respectively.

[0094] In the third measurement example, as in the first or second
measurement example, in a case where deviation of the portable terminal
100 from the moving path during the measurement is detected based on the
image obtained from the camera 1 and the information obtained from the
acceleration sensor 2 and the angular velocity sensor 3, the path may be
adjusted through a process of displaying a correct path on the display
10, for example.

[0095] Measurement patterns obtained from the third measurement example
are measurement patterns MP3 and MP4, as illustrated in FIG. 11. In FIG.
10, the measurement using the measurement pattern MP4 is performed after
the measurement using the measurement pattern MP3, but the order may be
reversed.

[0096] Likewise, the elevation angles .gamma. of 0.degree., 30.degree.,
and 60.degree. are merely examples. Another elevation angle may be
adopted, and the number of elevation angles .gamma. is not limited to
three. The number of elevation angles .gamma. is preferably two or more.
The horizontal angle .theta. is not limited to -30.degree. and
30.degree..

[0097] In step S309, the feature amount extraction unit 42 extracts a
feature amount of an HRTF. For example, the feature amount extraction
unit 42 may extract a feature amount of an HRIR as follows.

[0098] When the horizontal angle .theta. is -30.degree. and the elevation
angles .gamma. are 0.degree., 30.degree., and 60.degree. in the
measurement pattern MP3 of FIG. 11, the frequencies of the peak P2 will be
called a feature amount 2. When the horizontal angle .theta. is
30.degree. and the elevation angles .gamma. are 0.degree., 30.degree.,
and 60.degree. in the measurement pattern MP4 of FIG. 11, the frequencies
of the peak P2 will be called a feature amount 3.

[0099] In the database 301, HRTFs of many people are respectively made in
association with at least feature amounts 2 and 3.

[0100] In addition to the frequencies of the peak P2, the feature amount
extraction unit 42 may extract a variation in the amplitude of the peak
P2 corresponding to the elevation angle .gamma. as a feature amount of an
HRTF.
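The elevation-dependent amplitude variation of the peak P2 can be sketched
as follows. This is an illustrative assumption, not the embodiment's
implementation: the 7-12 kHz search band is a placeholder, since the
frequency of the peak P2 is not specified here, and the spread
(max minus min amplitude) is one plausible way to summarize the variation.

```python
import numpy as np

def peak_amplitude_variation(hrirs, fs, band=(7000.0, 12000.0)):
    """Given HRIRs measured at several elevation angles (e.g. 0, 30,
    and 60 degrees), return the per-elevation amplitudes of the
    strongest in-band spectral peak together with their spread.

    hrirs: iterable of 1-D HRIR arrays, one per elevation angle.
    band:  assumed search range bracketing the peak P2 (placeholder).
    """
    n = 4096
    amplitudes = []
    for hrir in hrirs:
        spectrum = np.abs(np.fft.rfft(hrir, n))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        amplitudes.append(float(spectrum[mask].max()))   # peak amplitude at this elevation
    return amplitudes, max(amplitudes) - min(amplitudes)
```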

[0101] Returning to FIG. 10, in step S310, the characteristic selection
unit 43 selects an HRTF having feature amounts, which are most similar to
the feature amounts 2 and 3 extracted by the feature amount extraction
unit 42, from the database 301, sets the selected HRTF to the
reproduction unit 44, and ends the process.

[0102] Specific data of the HRTF is similar to that of the first
measurement example. For example, the HRTF is data of HRTF (.theta.,0)
and HRTF (-.theta.,0) for localizing the left and right sounds in
directions of horizontal angles .+-..theta..degree. at an elevation angle
.gamma. of 0.degree.. The horizontal angle .theta..degree. is 30.degree.,
for example.

[0103] As the data of HRTF (.theta.,0) and HRTF (-.theta.,0), the
characteristic selection unit 43 does not necessarily select a pair of
data stored in the database 301. HRTF (.theta.,0) of one pair of data
HRTF (.theta.,0) and HRTF (-.theta.,0) stored in the database 301 may be
combined with HRTF (-.theta.,0) of another pair of data HRTF (.theta.,0)
and HRTF (-.theta.,0).

[0104] In the third measurement example, the feature amount 2 obtained
from the measurement pattern MP3 of FIG. 11 and the feature amount 3
obtained from the measurement pattern MP4 of FIG. 11 are used, but the
feature amounts 4 and 5 in the second measurement example may be added
thereto.

Fourth Measurement Example

[0105] The user may perform the fourth measurement example for measuring
all the above-described measurement patterns MP1 to MP4. In this case, in
the database 301, HRTFs of many people are respectively made in
association with the feature amounts 1 to 5.

[0106] The characteristic selection unit 43 selects an HRTF having feature
amounts, which are most similar to the feature amounts 1 to 5 extracted
by the feature amount extraction unit 42, from the database 301, and sets
the selected HRTF to the reproduction unit 44.

[0107] FIG. 12 collectively illustrates the above-described first to
fourth measurement examples. As illustrated in FIG. 12, in the first
measurement example, in order to select the HRTF, the feature amount 1
obtained from the measurement pattern MP1 where the horizontal angle
.theta. is 0.degree. and the elevation angles .gamma. are 0.degree.,
30.degree., and 60.degree. is used.

[0108] In the second measurement example, in order to select the HRTF, the
feature amount 1 obtained from the measurement pattern MP1 where the
horizontal angle .theta. is 0.degree. and the elevation angles .gamma.
are 0.degree., 30.degree., and 60.degree., and the feature amounts 4 and
5 obtained from the measurement pattern MP2 where the horizontal angles
.theta. are -30.degree. and 30.degree. and the elevation angle .gamma. is
0.degree. are used.

[0109] In the third measurement example, in order to select the HRTF, the
feature amount 2 obtained from the measurement pattern MP3 where the
horizontal angle .theta. is -30.degree. and the elevation angles .gamma.
are 0.degree., 30.degree., and 60.degree., and the feature amount 3
obtained from the measurement pattern MP4 where the horizontal angle
.theta. is 30.degree. and the elevation angles .gamma. are 0.degree.,
30.degree., and 60.degree. are used.

[0110] In the fourth measurement example, in order to select the HRTF, the
feature amounts 1 to 5 obtained from the measurement patterns MP1 to MP4
are used.

[0111] As the number of measurement patterns increases, it becomes easier
to extract the feature amounts. Accordingly, the second or third
measurement example is preferable to the first measurement example, and
the fourth measurement example is most preferable. However, as the number
of measurement patterns increases, the measurement becomes more
complicated.

[0112] As described above, the head-related transfer function selection
device according to the embodiment includes the measuring unit 41, the
feature amount extraction unit 42, and the characteristic selection unit
43.

[0113] The measuring unit 41 obtains a head-related impulse response of a
user based on a sound signal which is collected by the microphone 20 worn
on an ear of the user in a state where a predetermined sound as a
measurement signal is outputted from the speaker 6.

[0114] The feature amount extraction unit 42 extracts a feature amount of
a frequency characteristic corresponding to the head-related impulse
response. The characteristic selection unit 43 selects a head-related
transfer function from the database 301, where head-related transfer
functions of many people are respectively made in association with
feature amounts of head-related transfer functions, based on the feature
amount extracted by the feature amount extraction unit 42.

[0115] It is assumed that, in a state where the speaker 6 (portable
terminal 100) is positioned in front of the face of the user, the
horizontal angle .theta. is 0.degree. and the elevation angle .gamma. is
0.degree.. The measuring unit 41 preferably obtains a plurality of
head-related impulse responses when the speaker 6 is moved to a
position where the horizontal angle .theta. is 0.degree. or a
predetermined positive or negative value, and then is moved in an arc
shape in a vertical direction to positions where the elevation angles
.gamma. are a plurality of values, respectively.

[0117] The measuring unit 41 may further obtain a plurality of
head-related impulse responses when the speaker 6 is moved to positions
where the elevation angles .gamma. are 0.degree. and the horizontal
angles .theta. are predetermined positive and negative values,
respectively.

[0118] The head-related transfer function selection method according to
the embodiment includes: generating a predetermined sound as a
measurement signal from the speaker 6; and obtaining a head-related
impulse response of a user based on a sound signal of the predetermined
sound which is collected by the microphone 20 worn on an ear of the
user.

[0119] The head-related transfer function selection method according to
the embodiment includes: extracting a feature amount of a frequency
characteristic corresponding to the head-related impulse response; and
selecting a head-related transfer function from a database, where
head-related transfer functions of many people are respectively made in
association with feature amounts of head-related transfer functions,
based on the extracted feature amount.

[0120] In accordance with the head-related transfer function selection
device and the head-related transfer function selection method according
to the embodiment, a head-related transfer function similar to that of
the user himself/herself can be easily selected.

[0121] A part of the measuring unit 41, the feature amount extraction unit
42, and the characteristic selection unit 43 may be configured by a
computer program (head-related transfer function selection program). A
part of the reproduction unit 44 may be configured by a computer program.
The computer program may be stored in a computer-readable non-transitory
storage medium, or may be provided through an arbitrary communication
line such as the internet. The computer program may be a program product.

[0122] The head-related transfer function selection program according to
the embodiment allows a computer to execute a step of obtaining a
head-related impulse response of a user based on a sound signal which is
collected by the microphone 20 worn on an ear of the user in a state
where a predetermined sound as a measurement signal is outputted from the
speaker 6.

[0123] The head-related transfer function selection program according to
the embodiment allows a computer to execute a step of extracting a
feature amount of a frequency characteristic corresponding to the
head-related impulse response.

[0124] The head-related transfer function selection program according to
the embodiment allows a computer to execute a step of selecting a
head-related transfer function from the database 301, where head-related
transfer functions of many people are respectively made in association
with feature amounts of head-related transfer functions, based on the
extracted feature amount.

[0125] In the head-related transfer function selection program according
to the embodiment, a head-related transfer function similar to that of
the user himself/herself can be easily selected, and a localization
effect similar to characteristics of the user himself/herself can be
easily realized.

[0126] The sound reproduction device according to the embodiment includes:
the head-related transfer function selection device according to the
embodiment; and the reproduction unit 44 that performs a convolution
operation on sound data with the head-related transfer function selected
by the characteristic selection unit 43 and reproduces the resulting
sound data.
Accordingly, in accordance with the sound reproduction device according
to the embodiment, a sound signal can be reproduced using a head-related
transfer function similar to that of the user himself/herself.
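The reproduction unit's convolution can be sketched as follows. This is a
minimal illustration assuming a mono input signal and left- and right-ear
HRIRs of equal length, not the embodiment's actual implementation.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono sound signal with the left- and right-ear HRIRs
    of the selected HRTF, producing a two-channel signal whose sound
    image is localized in the direction the HRTF was measured for.

    Assumes hrir_left and hrir_right have the same length so the two
    convolved channels can be stacked.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # shape: (samples, 2)
```

In practice, long signals would be processed block-wise (e.g. with an
overlap-add FFT convolution), but the operation is the same.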

[0127] The present invention is not limited to the above-described
embodiment, and various modifications can be made within a range not
departing from the scope of the present invention. When the head-related
transfer function selection device according to the embodiment is
configured, the selection between hardware and software is arbitrary.