This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2010-212825, filed on Sep. 22, 2010, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are directed to a terminal device, a mobile terminal, and a navigation program.
There is a known navigation technology for guiding users to target locations by transferring the target locations to the users. When such navigation is performed, the target locations are transferred not only via video images but also via sounds so that the locations can be intuitively perceived. The "target" mentioned here can be any target that allows users to reach their desired destinations, including sites, persons, and mobile units.
One example of a navigation device is the remaining-distance switching type described here. This type of navigation device divides, for each nearby directional point, the remaining distance from its own vehicle to the nearby directional point into multiple sections and stores a set of sound effects in which a sound effect is allocated to each section. Then, from among the sets of sound effects associated with the nearby directional points extracted from map data, the remaining-distance switching type navigation device performs guidance by replaying the sound effect of the section corresponding to the distance from its own vehicle to the nearby directional point. Accordingly, even when nearby directional points are continuously present, the remaining distance to each nearby directional point is guided.
Another example of a navigation device is the vehicle-speed switching type described here. This type of navigation device performs guidance by determining, in accordance with the vehicle speed, the number of types of the series of sound effects for guidance to be replayed until the vehicle reaches the directional point, and by replaying the determined series of sound effects for guidance. Accordingly, a sense of the distance to the directional point can be easily recognized.
Another example of a navigation device is the sound image localization type described here. This type of navigation device outputs, from a plurality of speakers arranged in a vehicle cabin, a sound associated with the target, i.e., a sound icon to allow drivers to recognize the target. For example, by controlling the sound level and the delay time of the sound that is output from each speaker, the sound image localization type navigation device locates a sound image of the sound icon on the target or near the target. Furthermore, the sound image localization type navigation device adds reverberation to the sound icon in accordance with the distance to the target. In this way, the drivers can accurately grasp the target location by using their hearing.
However, with the conventional technologies described above, as will be described below, there is a problem in that it is not possible to accurately transfer the target direction.
For example, both the remaining-distance switching type navigation devices and the vehicle-speed switching type navigation devices only transfer a sense of the distance to the target. Accordingly, even when a user grasps how far away from the target he/she is, the user may not grasp the direction of the target.
Furthermore, the sound image localization type navigation device transfers the direction of the target by locating the sound image of the sound icon on or near the target. However, even when the sound image is located on or near the target, a user may not perceive a slight difference between directions; therefore, the direction of the target is only roughly transferred to the user. Specifically, even if the target is located in front of a user, the user may not be able to distinguish, from the sounds output from the speakers, whether the target is located directly in front of the user or slightly away from the front of the user. Furthermore, if the user changes his or her traveling direction, the user may not be able to distinguish, from the sounds, whether the target has moved closer to the front of the user or has shifted away from the front of the user.
According to an aspect of an embodiment of the invention, a terminal device includes: a calculating unit that calculates an orientation of a device with respect to a target; a determining unit that determines a degree of processing related to an attribute of a sound that indicates the target in accordance with the orientation calculated by the calculating unit; and an output control unit that controls output of the sound in accordance with the degree of processing determined by the determining unit.
The object and advantages of the embodiment will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the embodiment, as claimed.
Preferred embodiments of the present invention will be explained with reference to the accompanying drawings. The disclosed technology is not limited to the embodiments. Furthermore, the embodiments can be appropriately used in combination as long as the processes do not contradict each other.
Specifically, when a sense of the direction to the target is given to a user by using a sound, the terminal device 10 according to the embodiment makes the degree of processing related to the attribute of the sound larger as the target is shifted further from the front of the terminal device 10, thereby enhancing any shift from the front of the terminal device 10. Accordingly, the terminal device 10 according to the embodiment can output a guiding sound such that a user can easily perceive whether the target faces the front of the user or whether the target is moving closer to the front of the user. Therefore, the terminal device 10 according to the embodiment can accurately transfer the direction of the target.
Any information processing apparatus in which a navigation function can be installed can be used as the terminal device 10. The terminal device 10 can be implemented by the various devices described below. Examples of the terminal device 10 as a terminal carried by a user include a mobile phone, a personal handyphone system (PHS), and a personal digital assistant (PDA). An example of the terminal device 10 as a terminal installed in a mobile unit, such as a vehicle, is a navigation device. Furthermore, the terminal device 10 does not necessarily have to be a mobile terminal; a fixed terminal, such as a personal computer, can also be used.
As illustrated in
The input unit 11a is an input device that receives instruction inputs related to various kinds of information. Specifically, the input unit 11a receives, via an operation performed by a user, instructions for starting and ending the navigation function. Furthermore, the input unit 11a receives the setting of the target to which the user desires to be guided. Various operation keys can be used for the input unit 11a. Examples of the input unit 11a include a numeric keypad (ten key) that is used to input numerals or characters, a cursor key that is used to select a menu or to scroll the screen window, and the like. Furthermore, a touch panel integrated with the display unit 11b, which will be described later, can also be used as the input unit 11a.
The display unit 11b is a display device that displays various kinds of information. For example, the display unit 11b displays map data so that the target can be set and confirmed on the screen. Examples of the display unit 11b include a monitor, a display, and a touch panel.
In the following, a description is given on the basis of the assumption that a user carrying the terminal device 10 uses the navigation function installed in the terminal device 10 to receive a guiding service to the target. Furthermore, the description is given on the basis of the assumption that, as an example of setting the target, the navigation function is started when a user operates the input unit 11a, map data is displayed on the screen by the display unit 11b, and a landmark or an intersection is set as a target. Here, a case is assumed in which a target on a map is received; however, the target on the map does not necessarily have to be received. For example, if the terminal device 10 is a communication device, such as a mobile phone or a PHS, it is also possible to automatically set, as the target, another communication device that is in a call connection with the terminal device 10. In such a case, the direction in which a person who carries the other communication device, for example, a person to be met, is located is transferred using a sound.
The location acquisition unit 12 is a processing unit that acquires the location of the terminal device 10 and the target location. To acquire the location of the terminal device 10, the location acquisition unit 12 measures, using a Global Positioning System (GPS) receiver, latitude and longitude of a point where the terminal device 10 is located. Then, the location acquisition unit 12 calculates, from the measured latitude and longitude, a coordinate location in plane rectangular coordinates and acquires the location of the terminal device 10. Furthermore, to acquire the target location, the location acquisition unit 12 acquires, as the target location, the plane rectangular coordinates of the target of the map data specified via the input unit 11a.
If the location of the terminal device 10 in the plane rectangular coordinates is acquired using the latitude and the longitude, the conventional technology described here can be used. One example of such a technology is disclosed in “TOTAL INVERSE SOLUTIONS FOR THE GEODESIC AND GREAT ELLIPTIC” B. R. Bowring Survey Review 33, 261 (July, 1996) 461-476.
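As an illustrative aid, the following minimal Python sketch shows one way to obtain approximate plane rectangular coordinates from latitude and longitude. It uses a simple equirectangular approximation around a local origin rather than the rigorous geodesic solution cited above; all function and variable names are illustrative.

```python
import math

def to_plane_coords(lat, lon, origin_lat, origin_lon):
    """Convert latitude/longitude (degrees) to approximate plane
    rectangular coordinates (meters) around a local origin.

    Equirectangular approximation: valid only near the origin and
    not a substitute for a rigorous geodesic method."""
    earth_radius = 6378137.0  # WGS-84 equatorial radius, meters
    x = math.radians(lon - origin_lon) * earth_radius * math.cos(math.radians(origin_lat))
    y = math.radians(lat - origin_lat) * earth_radius
    return x, y
```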
The orientation acquisition unit 13 is a processing unit that acquires the orientation of the terminal device 10. For example, by using an electromagnetic compass, the orientation acquisition unit 13 acquires, as the orientation of the terminal, the direction indicated by the central vertical axis of the terminal device 10 on a horizontal plane, for example, an angle A formed by the longitudinal direction of the terminal device and the north direction (0°). Alternatively, by extracting the track of the terminal device 10 using the GPS receiver, the orientation acquisition unit 13 can acquire, as the orientation of the terminal, the angle A formed by the traveling direction of the terminal device 10 and the north direction (0°). In the two examples described above, an angle is acquired using the north direction (0°) as a reference direction; however, the direction is not limited thereto. The disclosed terminal device can use any direction as a reference direction.
In the above description, the orientation of the terminal is acquired on the basis of the assumption that the orientation of the terminal and the front-facing direction of a user are the same. However, when the terminal device 10 is used as a communication device by placing it in a user's ear, the orientation of the terminal and the front-facing direction of a user are not always the same. In such a case, any technology can be used in which the orientation of the terminal is corrected to calculate the front-facing direction of a user.
The orientation calculating unit 14 is a processing unit that calculates the orientation of the terminal with respect to the target. For example, in accordance with the location of the terminal device 10 and the target location acquired by the location acquisition unit 12, the orientation calculating unit 14 obtains the direction from the terminal device 10 to the target. Specifically, the orientation calculating unit 14 acquires, as the site direction of the target, an angle B formed by the direction from the terminal device 10 to the target and the north direction (0°). Then, from the angle B acquired in this way as the site direction of the target and the angle A acquired as the orientation of the terminal by the orientation acquisition unit 13, the orientation calculating unit 14 calculates the orientation of the terminal device 10 with respect to the target, i.e., "angle B − angle A".
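A minimal sketch of this calculation, assuming plane rectangular coordinates from the location acquisition unit 12 and angles measured clockwise from north, is given below; the function name and the normalization to [−180°, 180°) are illustrative assumptions.

```python
import math

def orientation_to_target(terminal_xy, target_xy, angle_a):
    """Return the orientation of the terminal with respect to the
    target, i.e. "angle B - angle A", normalized to [-180, 180).

    angle_a is the terminal orientation (degrees clockwise from
    north) acquired by the orientation acquisition unit 13."""
    dx = target_xy[0] - terminal_xy[0]  # east offset to the target
    dy = target_xy[1] - terminal_xy[1]  # north offset to the target
    # Angle B: site direction of the target, clockwise from north (0 deg).
    angle_b = math.degrees(math.atan2(dx, dy)) % 360.0
    return (angle_b - angle_a + 180.0) % 360.0 - 180.0
```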
The degree-of-processing determining unit 15 is a processing unit that determines the degree of processing related to the attribute of a sound indicating the target in accordance with the orientation of the terminal with respect to the target calculated by the orientation calculating unit 14, i.e., in accordance with an orientation Φ (t) of a user with respect to the target.
An example will be described of a case in which the target direction is transferred by processing, from among the attributes of a sound, the distance (r) to the sound source. The "distance to the sound source" mentioned here indicates the distance from the location at which the terminal device 10 is located to the location at which a virtual sound source is arranged. The "degree of processing" indicates the degree of the distance to the sound source.
In the example illustrated in
In the example illustrated in
The transfer characteristic storing unit 16 is a storing unit that stores therein a transfer characteristic. To position the sound source at a given location by performing a convolution of the head-related transfer function, transfer characteristics that are previously measured for each of the left and the right ears are registered in the transfer characteristic storing unit 16. The convolution is performed by the output control unit 18, which will be described later.
The guiding sound storing unit 17 is a storing unit that stores therein a guiding sound that is used to guide a user to the target. It is possible to previously register, in the guiding sound storing unit 17, an electronic sound, for example, a "beeping sound . . . ". Alternatively, a desired tune can be preinstalled or can be downloaded and then installed. Any kind of sound can be used for the guiding sound as long as a person can perceive it.
The output control unit 18 is a processing unit that controls the output of a sound indicating the target. The output is performed in accordance with the degree of processing related to the attribute of the sound determined by the degree-of-processing determining unit 15. For example, from among the transfer characteristics stored in the transfer characteristic storing unit 16, the output control unit 18 extracts the transfer characteristics closest to the distance (r) to the sound source determined by the degree-of-processing determining unit 15. Specifically, the output control unit 18 extracts a transfer characteristic HL(l, α) and a transfer characteristic HR(l, α) for which the difference |l − r| between the distances to the sound source is smallest. Then, the output control unit 18 performs, on the transfer characteristic HL(l, α) and the transfer characteristic HR(l, α), a frequency-time conversion, i.e., an inverse Fourier transformation. Accordingly, the output control unit 18 obtains the head-related transfer function of each of the left and the right ears, i.e., calculates an impulse response hrtf L(l, α, m) and an impulse response hrtf R(l, α, m), where m = 0, . . . , M−1 and M is the length of the impulse response. Then, by using a finite impulse response (FIR) filter, the output control unit 18 performs the convolution indicated by Equation (1) and Equation (2) below. Specifically, the output control unit 18 convolves the impulse response hrtf L(l, α, m) for the left ear and the impulse response hrtf R(l, α, m) for the right ear with a guiding sound signal sig(n) extracted from the guiding sound storing unit 17. In this way, after creating the stereo signals of the output sound INL for the left ear and the output sound INR for the right ear, the output control unit 18 outputs the stereo signals to the sound output unit 19.
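Since Equations (1) and (2) are not reproduced here, the following sketch assumes they take the standard form of FIR convolutions of the guiding sound signal sig(n) with the left- and right-ear impulse responses; obtaining the impulse responses from the stored transfer characteristics is taken as already done.

```python
import numpy as np

def render_guiding_sound(sig, hrtf_l, hrtf_r):
    """Create the stereo signals INL and INR by convolving the
    guiding sound sig(n) with the left- and right-ear impulse
    responses of length M (assumed forms of Equations (1) and (2))."""
    in_l = np.convolve(sig, hrtf_l)   # FIR filtering for the left ear
    in_r = np.convolve(sig, hrtf_r)   # FIR filtering for the right ear
    return in_l, in_r                 # passed on to the sound output unit 19
```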
The sound output unit 19 is a sound output device that outputs the sound signal that is output by the output control unit 18. A speaker or an earphone can be used for the sound output unit 19. Specifically, when sounds are output via speakers, the sound output unit 19 outputs the output sound INL for the left ear from a left speaker L and the output sound INR for the right ear from a right speaker R. Furthermore, when sounds are output via earphones, the sound output unit 19 outputs the output sound INL for the left ear from a left earphone L and the output sound INR for the right ear from a right earphone R.
The terminal device 10 described above includes, for example, a semiconductor memory device, such as a random access memory (RAM) or a flash memory, which is used for various processes. Furthermore, the terminal device 10 also includes an electronic circuit, such as a central processing unit (CPU) or a micro processing unit (MPU), and executes various processes using the RAM or the flash memory. Instead of the CPU or the MPU, the terminal device 10 can include an integrated circuit, such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
Flow of a Process
In the following, the flow of a process performed by the terminal device according to the embodiment will be described.
As illustrated in
Then, the location acquisition unit 12 acquires the location of the terminal device 10 and the location of the target (Step S103). Thereafter, the orientation acquisition unit 13 acquires the orientation of the terminal device 10 (Step S104). Subsequently, after obtaining the site direction of the target by using the location of the terminal device 10 and using the location of the target acquired by the location acquisition unit 12, the orientation calculating unit 14 calculates, from the site direction of the target and the orientation of the terminal that is acquired by the orientation acquisition unit 13, the orientation of the terminal device 10 with respect to the target (Step S105).
Then, in accordance with the orientation of the terminal with respect to the target calculated by the orientation calculating unit 14, i.e., the orientation of a user with respect to the target, the degree-of-processing determining unit 15 determines the degree of processing related to the attribute of a sound that indicates the target (Step S106). Subsequently, in accordance with the degree of processing related to the attribute of the sound determined by the degree-of-processing determining unit 15, the output control unit 18 processes a guiding sound and allows the sound output unit 19 to output the processed guiding sound (Step S107).
Then, processes from Steps S103 to S107 are repeatedly performed until the navigation function ends (No at Step S108). Thereafter, if the navigation function ends (Yes at Step S108), the process ends. The navigation function ends when it receives, from a user via the input unit 11a, an instruction to end the function or automatically ends when the user reaches the target.
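The loop of Steps S103 to S108 can be summarized by the following sketch; the `device` object and its method names are hypothetical stand-ins for the processing units described above.

```python
def navigation_loop(device):
    """Sketch of Steps S103 to S108; `device` and its methods are
    hypothetical stand-ins for the units of the terminal device 10."""
    while not device.navigation_ended():                          # Step S108
        own_pos, target_pos = device.acquire_locations()          # Step S103
        angle_a = device.acquire_terminal_orientation()           # Step S104
        phi = device.calculate_orientation(own_pos, target_pos,
                                           angle_a)               # Step S105
        degree = device.determine_degree_of_processing(phi)       # Step S106
        device.output_processed_guiding_sound(degree)             # Step S107
```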
As described above, the terminal device 10 according to the embodiment calculates the orientation of the terminal device 10 with respect to the target; determines the degree of processing related to the distance to the sound source in accordance with the calculated orientation; and controls the output of the sound in accordance with the degree of processing. Accordingly, a user can perceive whether he or she faces the target without perceiving the slight difference between the distances to the sound source. Accordingly, the terminal device 10 according to the embodiment can accurately transfer the direction of the target.
Furthermore, as the orientation of the front of the terminal device 10 is shifted with respect to the target, the terminal device 10 according to the embodiment increases the degree of processing related to the distance to the sound source. Accordingly, as the target becomes shifted from the front of the terminal device 10, the terminal device 10 according to the embodiment can enhance the shift from the front of the terminal device 10 by making the degree of processing related to the distance to the sound source larger. Accordingly, the terminal device 10 according to the embodiment can output a guiding sound such that a user can easily perceive whether the target faces the front of the user or whether the target is moving closer to the front of the user. Accordingly, the terminal device 10 according to the embodiment can further accurately transfer the direction of the target.
In the first embodiment described above, a case has been described in which the direction of the target is transferred by processing the distance (r) to the sound source from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in a second embodiment, a case will be described in which the direction of the target is transferred by processing the direction (θ) of the sound source from among the attributes of the sound.
In the second embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 21 determines the degree of processing related to the direction of the sound source in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of a user with respect to the target, calculated by the orientation calculating unit 14. The “direction of the sound source” mentioned here indicates the direction of a virtual sound source to be arranged. The “degree of processing” indicates the degree of the direction of the sound source.
In the example illustrated in
In the example illustrated in
The output control unit 22 creates a stereo signal by convolving impulse responses that position the sound source in the direction (θ) determined by the degree-of-processing determining unit 21 with the guiding sound stored in the guiding sound storing unit 17. Specifically, from among the transfer characteristics stored in the transfer characteristic storing unit 16, the output control unit 22 extracts the transfer characteristics closest to the direction (θ) of the sound source determined by the degree-of-processing determining unit 21. More specifically, the output control unit 22 extracts a transfer characteristic HL(l, α) and a transfer characteristic HR(l, α) for which the difference |α − θ| between the directions of the sound source is smallest. Then, the output control unit 22 performs an inverse Fourier transformation on the transfer characteristic HL(l, α) and the transfer characteristic HR(l, α). Accordingly, the output control unit 22 calculates the impulse response hrtf L(l, α, m) for the left ear and the impulse response hrtf R(l, α, m) for the right ear. Then, by using the FIR filter, the output control unit 22 performs the convolution represented by Equation (1) and Equation (2). Specifically, the output control unit 22 convolves the impulse response hrtf L(l, α, m) for the left ear and the impulse response hrtf R(l, α, m) for the right ear with the guiding sound signal sig(n) extracted from the guiding sound storing unit 17. In this way, after creating the stereo signals of the output sound INL for the left ear and the output sound INR for the right ear, the output control unit 22 outputs the stereo signals to the sound output unit 19.
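The nearest-direction selection can be sketched as follows; the mapping `stored` from measured directions α to transfer characteristic pairs is an assumption for illustration.

```python
def select_transfer_characteristic(theta, stored):
    """Pick the stored pair (H_L, H_R) whose measured direction
    alpha minimizes |alpha - theta|; `stored` is assumed to map
    directions (degrees) to frequency-domain pairs."""
    alpha = min(stored, key=lambda a: abs(a - theta))
    return stored[alpha]
```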
As described above, the terminal device 20 according to the second embodiment transfers the direction of the target by processing the direction (θ) of the sound source from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether the user faces the target without perceiving the slight difference between the directions of the sound source. Accordingly, with the terminal device 20 according to the second embodiment, it is possible to accurately transfer the direction of the target.
Furthermore, if the orientation of the front of a user with respect to the target is within a predetermined range, the terminal device 20 according to the second embodiment increases the degree of processing applied to a shift within that range compared with the degree applied in the other ranges. Accordingly, even when the target substantially faces the user, the sound source is arranged at a location shifted from the front of the user; the sound source is not arranged in front of the user unless the user exactly faces the target. Accordingly, the user can easily perceive that he or she faces the target. Therefore, the terminal device 20 according to the second embodiment effectively helps a user to face the target.
In the second embodiment, a case has been described in which the direction of the target is transferred by processing the direction (θ) of a sound source from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in a third embodiment, a case will be described in which the direction of the target is transferred by processing a sound volume (V) from among the attributes of a sound.
In the third embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 31 determines the degree of processing related to a sound volume (V) in accordance with the orientation of the terminal with respect to the target calculated by the orientation calculating unit 14. The “degree of processing” mentioned here indicates a control level of the ratio (%) of an output volume with respect to the maximum volume Vmax.
In the example illustrated in
The output control unit 32 changes the volume of the guiding sound stored in the guiding sound storing unit 17 in accordance with the ratio of the output volume determined by the degree-of-processing determining unit 31. Specifically, the output control unit 32 attenuates the sound signal of the guiding sound in accordance with the calculation equation "es(t) = s(t) × v / 100" for the output volume and outputs the attenuated sound signal of the guiding sound to the sound output unit 19. The symbol "es(t)" mentioned here indicates a processed guiding sound sample. Furthermore, the symbol "s(t)" indicates a pre-processed guiding sound sample.
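A minimal sketch of this attenuation, directly following the calculation equation above:

```python
def attenuate(s, v):
    """Apply es(t) = s(t) * v / 100 to the pre-processed guiding
    sound samples s, where v is the output-volume ratio (%)."""
    return [sample * v / 100.0 for sample in s]
```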
As described above, the terminal device 30 according to the third embodiment transfers the direction of the target by processing a sound volume (V) from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the sound volumes. Accordingly, the terminal device 30 according to the third embodiment can accurately transfer the direction of the target. Furthermore, with the terminal device 30 according to the third embodiment, the volume of a sound can be processed without using the head-related transfer function; therefore, the terminal device 30 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the third embodiment, a case has been described in which the direction of the target is transferred by processing the sound volume (V) from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in a fourth embodiment, a case will be described in which the direction of the target is transferred by processing the pitch (P) of a sound from among the attributes of the sound.
In the fourth embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 41 determines the degree of processing related to the pitch (P) of a sound in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of a user with respect to the target, calculated by the orientation calculating unit 14. The "degree of processing" mentioned here indicates a control level of the ratio (%) of an output pitch with respect to the maximum pitch Pmax. The maximum pitch Pmax is assumed to be the tone of the original sound.
In the example illustrated in
The output control unit 42 changes, in accordance with the output pitch determined by the degree-of-processing determining unit 41, the pitch of the guiding sound stored in the guiding sound storing unit 17. Specifically, the output control unit 42 acquires a frequency component by performing, on the sound signal of the guiding sound stored in the guiding sound storing unit 17, a time-frequency conversion, i.e., a Fourier transformation. Such a frequency component is represented as a complex number for each frequency (Hz). In the following, a description will be given by representing the number of divisions of the frequency components divided by a predetermined bandwidth (hereinafter, referred to as the "number of bandwidth divisions") as N and by representing the kth (k = 0, . . . , N−1) bandwidth frequency component as S(k). By using a ROUND function that outputs an integer by rounding off decimals, the output control unit 42 decreases the frequency components of the guiding sound by the shift p (Hz) corresponding to the ratio of the output pitch. Specifically, by using j = round(p/Δf), where Δf = f(k) − f(k−1), the output control unit 42 calculates S′(k) = S(k+j) for k = 0, . . . , N−j−1 and calculates S′(k) = 0 for k = N−j, . . . , N−1. Then, by performing a frequency-time conversion on S′(k), the output control unit 42 creates a sound signal of the guiding sound whose frequency components are decreased by p (Hz).
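The bin-shift processing above can be sketched as follows, under the assumption that the whole signal is transformed at once; a practical implementation would typically work frame by frame.

```python
import numpy as np

def lower_pitch(sig, p_hz, sample_rate):
    """Shift the frequency components of the guiding sound down by
    p_hz via the bin shift S'(k) = S(k + j), j = round(p / delta_f)."""
    spectrum = np.fft.rfft(sig)              # time-frequency conversion
    n = len(spectrum)                        # number of bandwidth divisions N
    delta_f = sample_rate / len(sig)         # bin width delta_f = f(k) - f(k-1)
    j = int(round(p_hz / delta_f))
    shifted = np.zeros_like(spectrum)
    shifted[:n - j] = spectrum[j:]           # S'(k) = S(k + j), k = 0, ..., N-j-1
    # S'(k) = 0 for k = N-j, ..., N-1 is given by the zero initialization.
    return np.fft.irfft(shifted, len(sig))   # frequency-time conversion
```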
As described above, the terminal device 40 according to the fourth embodiment transfers the direction of the target by processing the pitch (P) of a sound from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the pitches of the sound. Accordingly, the terminal device 40 according to the fourth embodiment can accurately transfer the direction of the target. Furthermore, the terminal device 40 according to the fourth embodiment can process the pitch without using the head-related transfer function; therefore, the terminal device 40 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the fourth embodiment, a case has been described in which the direction of the target is transferred by processing the pitch (P) of a sound from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in a fifth embodiment, a case will be described in which the direction of the target is transferred by processing the tempo (T) of a sound from among the attributes of the sound.
In the fifth embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 51 determines the degree of processing related to the tempo (T) of a sound in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of a user with respect to the target, calculated by the orientation calculating unit 14. The "degree of processing" mentioned here indicates a control level of the ratio (%) of an output tempo with respect to the maximum tempo Tmax. The maximum tempo Tmax is assumed to be the tempo of the original sound.
In the example illustrated in
The output control unit 52 changes the tempo of the guiding sound stored in the guiding sound storing unit 17 in accordance with the ratio of the output tempo determined by the degree-of-processing determining unit 51. Specifically, by applying the following conventional technology, the output control unit 52 can change the tempo of the sound signal of the guiding sound. An example of the conventional technology is disclosed in Sadaoki Furui, "Speech Information Processing" (Electronic Information Communication Engineering Series), Morikita Publishing Co., Ltd., in which time-domain harmonic scaling (TDHS) is described as a method of converting a tempo using a signal waveform.
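TDHS itself is pitch-synchronous and beyond the scope of a short example; the sketch below therefore substitutes a plain overlap-add time-scale modification to illustrate replaying the waveform at a modified tempo. It is not the method of the cited publication.

```python
import numpy as np

def change_tempo(sig, ratio, frame=1024, hop=256):
    """Stretch the guiding sound to `ratio` (%) of the original
    tempo with a simple overlap-add scheme (a stand-in for TDHS)."""
    stretch = 100.0 / ratio                  # ratio < 100 means a slower tempo
    out_len = int(len(sig) * stretch)
    out = np.zeros(out_len + frame)
    norm = np.zeros(out_len + frame)
    window = np.hanning(frame)
    for out_pos in range(0, out_len, hop):
        in_pos = int(out_pos / stretch)      # read position in the original
        seg = np.asarray(sig[in_pos:in_pos + frame], dtype=float)
        out[out_pos:out_pos + len(seg)] += seg * window[:len(seg)]
        norm[out_pos:out_pos + len(seg)] += window[:len(seg)]
    norm[norm == 0] = 1.0                    # avoid division by zero at the edges
    return out[:out_len] / norm[:out_len]
```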
As described above, the terminal device 50 according to the fifth embodiment transfers the direction of the target by processing the tempo (T) of the sound from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the tempos of the sound. Accordingly, the terminal device 50 according to the fifth embodiment can accurately transfer the direction of the target. Furthermore, the terminal device 50 according to the fifth embodiment can process the tempo of the sound without using the head-related transfer function; therefore, the terminal device 50 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the fifth embodiment, a case has been described in which the direction of the target is transferred by processing the tempo (T) of a sound from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in the sixth embodiment, a case will be described in which the direction of the target is transferred by processing the frequency characteristic of a sound from among the attributes of the sound.
In the sixth embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 61 determines the degree of processing related to the frequency characteristic of a sound in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of the user with respect to the target, calculated by the orientation calculating unit 14.
The “degree of processing” mentioned here indicates the control level of the ratio (%) of a supplied gain with respect to the maximum gain C max that is applied to the frequency component of the guiding sound.
In the example illustrated in
The output control unit 62 changes, in accordance with the ratio of the supplied gain determined by the degree-of-processing determining unit 61, the frequency characteristic of the guiding sound stored in the guiding sound storing unit 17. Specifically, the output control unit 62 calculates a supplied gain e(f) applied to the sound signal of the guiding sound in accordance with the calculation equation "e(f) = 10^((c(Φ) × g(f)) / 100 / 20)" for the supplied gain. The symbol "g(f)" included in the calculation equation indicates the maximum gain Cmax illustrated in
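A sketch of this gain application is given below; it assumes `g` is a callable returning the maximum gain Cmax in dB for a frequency bin and that the full spectrum is processed at once.

```python
import numpy as np

def apply_frequency_gain(sig, c_phi, g):
    """Multiply each frequency component by the supplied gain
    e(f) = 10 ** ((c_phi * g(f)) / 100 / 20), where c_phi is the
    orientation-dependent ratio (%) and g(f) the maximum gain (dB)."""
    spectrum = np.fft.rfft(sig)
    gains = np.array([10.0 ** ((c_phi * g(f)) / 100.0 / 20.0)
                      for f in range(len(spectrum))])
    return np.fft.irfft(spectrum * gains, len(sig))
```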
As described above, the terminal device 60 according to the sixth embodiment transfers the direction of the target by processing the frequency characteristic of the sound from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the frequency characteristics of the sound. Accordingly, the terminal device 60 according to the sixth embodiment can accurately transfer the direction of the target. Furthermore, the terminal device 60 according to the sixth embodiment can process the frequency characteristic without using the head-related transfer function; therefore, the terminal device 60 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the sixth embodiment, a case has been described in which, by applying a gain to each frequency component of the guiding sound, the frequency characteristic of the sound is processed such that a user can easily perceive the sound; however, the device disclosed in the present invention is not limited thereto. For example, by applying a gain to each frequency component of a guiding sound, the frequency characteristic of the sound can also be processed such that a user hardly perceives the sound.
As illustrated in
When controlling such a gradient, the degree-of-processing determining unit 61 determines an inclination control level of a supplied gain to be applied to the frequency characteristic of the sound in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of the user with respect to the target, calculated by the orientation calculating unit 14.
In the example illustrated in
Then, the output control unit 62 changes the frequency characteristic of the guiding sound stored in the guiding sound storing unit 17 in accordance with the inclination control level A of the supplied gain determined by the degree-of-processing determining unit 61. For example, the output control unit 62 calculates a supplied gain e(f) to be supplied to the sound signal of the guiding sound in accordance with the calculation equation "e(f) = 10^((f × A) / 100 / 20)" for the supplied gain. Thereafter, the output control unit 62 performs the same processes as those described in the sixth embodiment.
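The tilted gain can be sketched analogously; here f is treated as the frequency-bin index, which is an assumption, since the embodiment may define f in Hz.

```python
import numpy as np

def apply_tilt_gain(sig, a):
    """Apply the inclination-controlled gain e(f) = 10 ** ((f * a) / 100 / 20)
    so that higher frequency components are boosted (or cut) more."""
    spectrum = np.fft.rfft(sig)
    f = np.arange(len(spectrum))
    return np.fft.irfft(spectrum * 10.0 ** ((f * a) / 100.0 / 20.0), len(sig))
```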
As described above, the terminal device 60 according to the application example transfers the direction of the target by processing the frequency characteristic of the sound from among the attributes of the sound. Accordingly, in a similar manner as in the sixth embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the frequency characteristics of the sound. Accordingly, the terminal device 60 according to the application example can accurately transfer the direction of the target. Furthermore, the terminal device 60 according to the application example can process the frequency characteristic without using the head-related transfer function; therefore, the terminal device 60 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the sixth embodiment, a case has been described in which the direction of the target is transferred by processing the frequency characteristic of the sound from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in a seventh embodiment, a case will be described in which the direction of the target is transferred by processing the bandwidth of the sound from among the attributes of the sound.
In the seventh embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 71 determines the degree of processing of the bandwidth (W) of the sound in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of a user with respect to the target, calculated by the orientation calculating unit 14. The “degree of processing” mentioned here indicates the control level of the ratio (%) of an output bandwidth with respect to the maximum bandwidth Wmax. The maximum bandwidth Wmax is assumed to be the bandwidth of the original sound.
In the example illustrated in
The output control unit 72 changes the bandwidth of the guiding sound stored in the guiding sound storing unit 17 in accordance with the ratio of the output bandwidth determined by the degree-of-processing determining unit 71. Specifically, the output control unit 72 obtains a frequency component by performing a time-frequency conversion on the sound signal of the guiding sound stored in the guiding sound storing unit 17. Such a frequency component is represented as a complex number for each frequency (Hz). In the following, a description will be given by representing the number of divisions of the frequency components divided by a predetermined bandwidth (hereinafter, referred to as the "number of bandwidth divisions") as N and by representing the kth (k = 0, . . . , N−1) bandwidth frequency component as S(k). By using a ROUND function that outputs an integer by rounding off decimals, the output control unit 72 thins out a part of the frequency components in accordance with the ratio of the output bandwidth of the frequency components of the guiding sound. Specifically, by using q = round(N × w / 100), the output control unit 72 calculates S‴(k) = S(k) for k = 0, . . . , q−1 and calculates S‴(k) = 0 for k = q, . . . , N−1. By doing so, from among the frequency components of the original sound, the frequency components for k = q, . . . , N−1 are thinned out. Then, by performing the frequency-time conversion on S‴(k), the output control unit 72 creates a sound signal of the guiding sound in which the frequency components are thinned out to the ratio (w) of the output bandwidth.
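A minimal sketch of this bandwidth thinning:

```python
import numpy as np

def limit_bandwidth(sig, w):
    """Keep the lowest w % of the frequency components:
    S'''(k) = S(k) for k = 0, ..., q-1 and 0 otherwise,
    with q = round(N * w / 100)."""
    spectrum = np.fft.rfft(sig)              # time-frequency conversion
    n = len(spectrum)                        # number of bandwidth divisions N
    q = int(round(n * w / 100.0))
    spectrum[q:] = 0.0                       # thin out components k = q, ..., N-1
    return np.fft.irfft(spectrum, len(sig))  # frequency-time conversion
```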
As described above, the terminal device 70 according to the seventh embodiment transfers the direction of the target by processing the bandwidth (W) of the sound from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the bandwidths of the sound. Accordingly, the terminal device 70 according to the seventh embodiment can accurately transfer the direction of the target. Furthermore, the terminal device 70 according to the seventh embodiment can process the bandwidth of the sound without using the head-related transfer function; therefore, the terminal device 70 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the seventh embodiment, a case has been described in which the direction of the target is transferred by processing the bandwidth of the sound from among the attributes of the sound; however, another attribute of a sound can also be used. Accordingly, in an eighth embodiment, a case will be described in which the direction of the target is transferred by processing the signal to noise ratio (SNR) of the sound from among the attributes of the sound.
In the eighth embodiment, the functioning units other than the degree-of-processing determining unit 15 and the output control unit 18 illustrated in
The degree-of-processing determining unit 81 determines the degree of processing related to the SNR of the sound in accordance with the orientation of the terminal with respect to the target, i.e., the orientation of a user with respect to the target, calculated by the orientation calculating unit 14. The “degree of processing” mentioned here indicates the control level of the ratio (%) of the SNR of an output signal with respect to the maximum SNRmax. The maximum SNRmax is assumed to be the SNR of the original sound.
In the example illustrated in
The output control unit 82 superimposes white noise on the guiding sound stored in the guiding sound storing unit 17 in accordance with the ratio of the SNR of the output signal determined by the degree-of-processing determining unit 81. The white noise mentioned here is noise whose power is uniform over the entire frequency bandwidth and whose amplitude components have a normal distribution.
Specifically, the output control unit 82 calculates, using Equation (3) described below, the magnitude S1 (dB) of the sound signal of the guiding sound stored in the guiding sound storing unit 17. The symbol "Q" in Equation (3) represents the number of frame samples. The output control unit 82 creates white noise w(t) by using a conventional technology for creating random numbers having a normal distribution. Then, by using Equation (4) below, the output control unit 82 adjusts the magnitude of the white noise such that it matches the SNR(Φ) of the output signal determined by the degree-of-processing determining unit 81. In Equation (4), "w(t)" represents a sample of the white noise sound signal that has not been processed and "w′(t)" represents a sample of the white noise sound signal that has been processed. Thereafter, the output control unit 82 superimposes the processed white noise w′(t) on the sound signal s(t) of the guiding sound and outputs the resulting signal.
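Since Equations (3) and (4) are not reproduced here, the sketch below assumes they take the standard forms for frame power in dB and for scaling noise to a target SNR; the exact equations of the embodiment may differ.

```python
import numpy as np

def add_white_noise(s, snr_db):
    """Superimpose white noise on the guiding sound so that the
    result has the signal-to-noise ratio snr_db (assumed forms of
    Equations (3) and (4))."""
    s = np.asarray(s, dtype=float)
    q = len(s)                                        # number of frame samples Q
    signal_db = 10.0 * np.log10(np.sum(s ** 2) / q)   # frame power of s(t) in dB
    w = np.random.normal(0.0, 1.0, q)                 # white noise w(t)
    noise_db = 10.0 * np.log10(np.sum(w ** 2) / q)
    # Scale w(t) so that signal_db minus the new noise power equals snr_db.
    w_scaled = w * 10.0 ** ((signal_db - noise_db - snr_db) / 20.0)
    return s + w_scaled                               # superimposed output signal
```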
As described above, the terminal device 80 according to the eighth embodiment transfers the direction of the target by processing the SNR of the sound from among the attributes of the sound. Accordingly, in a similar manner as in the first embodiment, a user can perceive whether he or she faces the target without perceiving the slight difference between the SNRs of the sound. Accordingly, the terminal device 80 according to the eighth embodiment can accurately transfer the direction of the target. Furthermore, the terminal device 80 according to the eighth embodiment can process the SNR without using the head-related transfer function; therefore, the terminal device 80 can be preferably used not only when a stereo output is used but also when a monophonic output is used.
In the above explanation, the embodiments of the present invention have been described; however, the present invention can be implemented with various kinds of embodiments other than the embodiments described above. Therefore, another embodiment included in the present invention will be described below.
For example, in the first to eighth embodiments, each embodiment is separately implemented; however, it is also possible to implement two or more embodiments in combination from among the embodiments. Specifically, the disclosed device can determine the degree of processing related to the attributes by using at least one of or a combination of any of the distance, the direction, the volume, the pitch, the tempo, the frequency characteristic, the bandwidth, or the SNR of the sound. By doing so, the disclosed device can create, in a multifaceted manner, a guiding sound such that a user can easily perceive whether the target faces the front of the user or whether the target is moving closer to the front of the user. Accordingly, the disclosed device can accurately transfer the direction of the target.
The components of each device illustrated in the drawings are not necessarily physically configured as illustrated in the drawings. In other words, the specific shape of a separate or integrated device is not limited to the drawings; thus, all or part of the device can be configured by functionally or physically separating or integrating any of the units depending on various loads or use conditions. For example, the location acquisition unit 12, the orientation acquisition unit 13, the orientation calculating unit 14, the degree-of-processing determining unit 15, and the output control unit 18 can be connected via a network as external units of the terminal device. Furthermore, it is also possible to implement the function of the terminal device by allowing other devices to have the location acquisition unit 12, the orientation acquisition unit 13, the orientation calculating unit 14, the degree-of-processing determining unit 15, and the output control unit 18 and by allowing these units to be connected via a network and to cooperate with each other. Furthermore, it is also possible to implement the function of the terminal device by allowing other devices to have all or part of the transfer characteristic storing unit 16 or the guiding sound storing unit 17; to be connected to a network; and to cooperate with each other.
In the following, an example of the hardware configuration of the terminal device according to the first embodiment will be described with reference to
From among the above devices, the wireless communication unit 120, the display unit 130, the sound input/output unit 140, the input unit 150, and the storing unit 160 are connected to the processor 170. Furthermore, the antenna 110 is connected to the wireless communication unit 120. Furthermore, the microphone 140a and the speaker 140b are connected to the sound input/output unit 140.
Although not illustrated in
The storing unit 160 and the processor 170 implement the function performed by, for example, the location acquisition unit 12, the orientation acquisition unit 13, the orientation calculating unit 14, the degree-of-processing determining unit 15, the transfer characteristic storing unit 16, the guiding sound storing unit 17, and the output control unit 18 illustrated in
Furthermore, the navigation programs are not necessarily stored in the storing unit 160 from the beginning. For example, each program is stored in a “portable physical medium”, such as a memory card inserted into the terminal device 100. Then, the terminal device 100 can be configured such that it obtains each program from the portable physical medium and executes the programs. Alternatively, each program is stored in, for example, another computer or a server device that is connected to the terminal device 100 via a public circuit, the Internet, a LAN, a WAN, or the like. Then the terminal device 100 obtains each program from the other computer or the server device and executes the programs.
According to the terminal device disclosed in the present invention, an advantage is provided in that the direction of the target is accurately transferred.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.