1. Field of the Invention
The present invention relates to a sound generating method, a sound generating apparatus, a sound reproducing method and a sound reproducing apparatus that are capable of generating and reproducing left-and-right and up-and-down sound signals relating to a video signal.
2. Description of Related Art
In recent years, home TV (television) display apparatuses have become thinner and flatter, which has allowed display sizes to increase, enlarging the whole apparatus not only in the horizontal direction but also in the vertical (height) direction.
A general related-art TV outputs voices and sounds through a reproducing apparatus such as speakers provided at the left and right sides of the display, irrespective of the increase in display size, so that stereophonic 2-channel reproduction has often been applied.
Further, in recent years, there is known a multi-channel surround reproduction technology that enables reproduction over as much as 360 degrees with DVD (Digital Versatile Disc) software and the like. However, this technology also reproduces, in most cases, sound images located in the horizontal direction of the display using a plurality of speakers. Thus, there has not yet been provided an apparatus that reproduces a sound field in the vertical direction to match the display.
Incidentally, the present applicant has previously proposed a video camera that records and reproduces, together with video, multichannel audio picked up omni-directionally from a sound field space (see Patent Document 1). The technology of this video camera enables audio-video recording and reproduction that supports the surround reproduction technology; however, it is still unable to record and reproduce a sound field in the vertical direction of the display.
As described above, the displays of home TV apparatuses and the like are increasing in size, which gives rise to a problem in that related-art technologies that generate only a horizontal sound field, such as a stereophonic sound field or an omni-directional surround sound field, have difficulty in attaining a feeling of presence fitted to the image on the display.
The present invention has been made in view of the above problems, and it is intended to provide, in order to accommodate the increase in display size, a sound generating method and a sound generating apparatus that are capable of generating a sound field giving a richer feeling of presence matched to the left-and-right and the up-and-down directions of a display.
Further, the present invention is also intended to provide, in order to accommodate the increase in display size, a sound reproducing method and a sound reproducing apparatus that are capable of reproducing a sound field giving a richer feeling of presence matched to the left-and-right and the up-and-down directions of the display.
To solve the above problems, the present invention provides a sound generating method of generating sound signals related to a video signal, and it is characterized by generating independently each of the sound signals matched to a horizontal direction and a vertical direction of a video, thereby permitting the horizontal and the vertical sound signals that have been generated to be reproduced independently with horizontal sound output means and vertical sound output means, respectively.
Further, a sound generating apparatus of the present invention is a sound generating apparatus for generating sound signals related to a video signal, and it comprises horizontal sound generating means for generating a sound signal matched to a horizontal direction of a video, vertical sound generating means for generating a sound signal matched to a vertical direction of the video, and directivity generating means for varying a directivity characteristic of each of the horizontal and the vertical sound generating means.
Meanwhile, a sound reproducing method of the present invention is a sound reproducing method of reproducing sound signals related to a video signal, and it is characterized by reproducing independently, with horizontal sound output means and vertical sound output means that are arranged to surround a vicinity of a display serving to display a video, a horizontal sound signal and a vertical sound signal that have been generated to match a horizontal direction and a vertical direction of the video, respectively.
Further, a sound reproducing apparatus of the present invention is a sound reproducing apparatus for reproducing sound signals related to a video signal, and it comprises a display serving to display a video, and horizontal sound output means and vertical sound output means that are arranged to surround a vicinity of the display, wherein a horizontal sound signal and a vertical sound signal that have been generated to match a horizontal direction and a vertical direction of the video are reproduced independently with the horizontal and the vertical sound output means, respectively.
According to the present invention, the sound signals matched to the horizontal and the vertical directions of the video are generated independently, and the generated horizontal and vertical sound signals are reproduced independently with the horizontal and the vertical sound output means, respectively. Thus, as the video display size increases, adding an up-and-down (vertical) sound field to the related-art approach of generating only a left-and-right (horizontal) sound field ensures that the up-and-down motion of an object is rendered clearly and distinctly, and the direction of the sound source image can be matched to the object image through spatial vector synthesis of the sounds from the up-and-down and the left-and-right directions, thereby enabling a more realistic stereoscopic sound field to be reproduced and providing the viewer with a video full of the feeling of presence. Further, the present invention is applicable not only to a video camera but also to games and the like, in which case the same effect may be obtained by generating sound fitted to a video motion resulting from synthesis with computer graphics.
A technology of generating sound images not only in the horizontal direction but also in the vertical (height) direction, in keeping with the increase in TV display size described above, offers the following merits:
1. The up-and-down motion of the sound image is rendered clearly and distinctly. For instance, sounds originating from scenes of an airplane taking off or landing, of amusement rides involving up-and-down movement such as a slide or a roller coaster, or of fireworks, etc. are rendered clearly and distinctly;
2. It is possible to overcome a problem that arises with the increase in display size, that is, a mismatch between the image and the sound image depending on the vertical positions of the left and right speakers; and
3. Lens view angle information of the image capturing system may be used to fit the sound image more accurately to the position of the sound source in the image, so that a sound field close to reality may be created, for instance by localizing the sound image at the image position of the mouth of a speaking person in a dialogue scene.
While the display 1 is assumed to be a wide-screen thin flat display, such as a liquid crystal display, a plasma display or an organic electroluminescence display, it is to be understood that a CRT (Cathode-Ray Tube) display or a small-sized display is also applicable as a matter of course.
The speaker 2 serves to reproduce a left (L)-channel sound field, and the speaker 3 serves to reproduce a right (R)-channel sound field; these speakers 2 and 3 reproduce the left-and-right (horizontal) sound field. Further, the speaker 4 serves to reproduce an up (U)-channel sound field, and the speaker 5 serves to reproduce a down (D)-channel sound field; these speakers 4 and 5 reproduce the up-and-down (vertical) sound field. It is noted that the speakers 2 and 3 correspond to the "horizontal sound output means" of the present invention, and the speakers 4 and 5 correspond to the "vertical sound output means".
The sound field reproduced through each of the speakers 2 to 5 is generated with a sound generating apparatus described later. The sound generating apparatus generates, with a plurality of microphones, the left-and-right and the up-and-down sound fields corresponding to the video, and each of the generated sound fields is reproduced independently through the speakers 2 to 5. For instance, the sound generating apparatus picks up each of the L-channel, R-channel, U-channel and D-channel sound fields independently with microphones for the respective channels, and the picked-up sound fields are reproduced with the corresponding channel speakers.
As described above, the sound reproducing apparatus 100 of the embodiment of the present invention provides a surround effect giving a feeling of presence to the viewer by reproducing, with the speakers 2 to 5, the left-and-right and the up-and-down sound fields in correspondence with the video displayed on the display 1, thereby enabling the reproduction of a more realistic stereoscopic sound field.
It is noted that the speakers are not limited to the arrangement of the embodiment shown in the figure; for instance, an alternative arrangement using four speakers 6 to 9 around the display 1 may also be adopted.
A sound generating apparatus 101 in one embodiment of the present invention is now described.
Firstly, a video signal supplied from an image pickup element 11, such as a charge coupled device (CCD), functioning as the "image capturing means" of the present invention is subjected to prescribed image conversion processing in a camera-system signal processor 12 and then inputted to a recording-system audio-video encoding processor 13. Meanwhile, audio signals supplied from microphones 17 and 18 are converted by a microphone directivity generating processor 19 into directivity audio signals, which are then inputted to the recording-system audio-video encoding processor 13 and encoded into a prescribed recording stream signal together with the video signal. The recording stream signal is then recorded in a recording/reproducing means 15, such as a video disc or a videotape, by switching a schematically shown switch 14 to a recording mode position.
Details of a zoom lens 10 and a zoom position signal will be described later.
Further, in a reproduction mode, the switch 14 is switched to a reproduction mode position so that a reproduced stream signal from the recording/reproducing means 15 is inputted to a reproducing-system audio-video decoding processor 21. A decoded video signal is then outputted to the display 1, while a decoded audio signal is outputted through a plurality of amplifiers 22 to the speakers 2 to 5 (or 6 to 9) arranged as shown in the figure.
The microphones 17, 18 and the microphone directivity generating processor 19 are now described in detail.
One microphone 17 functions as the "horizontal sound generating means" of the present invention and generates directivity in a direction coincident with the horizontal direction of the image pickup element 11. The other microphone 18 functions as the "vertical sound generating means" of the present invention and generates directivity in a direction coincident with the vertical direction of the image pickup element 11. While the embodiment of the present invention is described in relation to an array microphone as one method of generating a directivity signal in each of the horizontal and the vertical directions, it is to be understood that other methods, such as the use of microphones having a cardioid or super-directional characteristic, are also available.
These microphones 17 and 18 may be mounted, for instance, on a casing panel at the back surface side of a display panel of the video camera in a cross shape or a T-like shape, etc., so as to give the horizontal and the vertical directivities to the microphones 17 and 18, respectively. It is noted that the microphones 17 and 18 may instead be mounted in an X-like shape; in this case, directivity signals adapted to the speakers 6 to 9 arranged as shown in the figure may be generated.
Thus, in the embodiment of the present invention, the directivities of the microphones 17 and 18 are set to be varied so as to match the view angle given at the time of zooming of the zoom lens 10.
It is noted that it is not always necessary to vary the directivities of the microphones 17 and 18 to match the view angle given at the time of zooming as described above. For instance, the directivities of the microphones 17 and 18 may instead be fixed at all times at a wide-angle-side position. In this case, a maximum feeling of presence is obtainable at all times in the up-and-down and the left-and-right directions, irrespective of the zooming.
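The specification gives no concrete formula relating the zoom position to the directional angle, so the following Python sketch shows only one plausible mapping, assuming the zoom position is available as a focal length and that the directional angle is set to half the horizontal view angle of the lens; the sensor width and the function names are illustrative assumptions.

```python
import math

def view_angle_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal view angle of the lens for a given focal length (standard optics)."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def directional_angle_deg(focal_length_mm: float, wide_limit_deg: float = 90.0) -> float:
    """One plausible zoom-to-directivity mapping: steer the microphone beams to half
    the current view angle, clamped to a wide-angle limit."""
    return min(wide_limit_deg, view_angle_deg(focal_length_mm) / 2.0)

# Zooming from wide (28 mm) to telephoto (280 mm) narrows the directivity accordingly.
for f_mm in (28.0, 70.0, 280.0):
    print(f_mm, round(directional_angle_deg(f_mm), 1))
```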
The microphones 31 to 34 are arranged in a line at a distance d from one another. Outputs from the microphones 31, 32 and 33 are inputted to an adder 38 through delay units 35, 36 and 37, respectively. The adder 38 adds and outputs all the outputs from the delay units 35 to 37 together with the output from the microphone 34. The delay unit 35 gives a delay 3T to its microphone output, the delay unit 36 gives a delay 2T, and the delay unit 37 gives a delay T.
Now, assume that sine waves each having an amplitude A arrive from a sound source SA placed at a position sufficiently remote relative to the distance d and approximately equidistant from each of the microphones 31 to 34; the respective microphone outputs are then all A sin ωt. These outputs are given the respective delays in the delay units 35 to 37 and are then added in the adder 38, so that the inputs to the adder 38 are added with mutual delay differences of T.
The resultant wave obtained when two sine waves having a delay difference T are added is given by the following expression (1), where the amplitude A is set to 1 for simplicity:
sin ωt+sin ω(t−T)=2 cos(πfT)·sin(ωt−πfT) (1)
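Expression (1) can be checked numerically. The following short Python sketch, using arbitrary test values for the frequency f and the delay T, compares the two sides of the identity:

```python
import numpy as np

# Numerical check of expression (1):
#   sin(wt) + sin(w(t - T)) = 2 cos(pi f T) * sin(wt - pi f T)
f = 1000.0                        # test frequency in Hz (arbitrary)
T = 50e-6                         # delay difference in seconds (arbitrary)
w = 2 * np.pi * f                 # angular frequency
t = np.linspace(0.0, 2e-3, 1000)  # 2 ms of time samples

lhs = np.sin(w * t) + np.sin(w * (t - T))
rhs = 2 * np.cos(np.pi * f * T) * np.sin(w * t - np.pi * f * T)

print(np.max(np.abs(lhs - rhs)))  # on the order of 1e-15: the two sides agree
```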
As expression (1) shows, the amplitude of the resultant wave is 2 cos(πfT), which depends on the frequency f: it approaches twice the amplitude of a single wave at low frequencies and falls to zero at f = 1/(2T). For the sound source SA in front of the array, therefore, the output of the adder 38 is attenuated in a frequency-dependent manner.
Meanwhile, consider a case in which a sound source SB is located in a direction such that its sound reaches the microphone 31 first and reaches each succeeding microphone a time T later. In this case, the propagation delays are exactly compensated by the delays given in the delay units 35 to 37, so that all the signals are added in the adder 38 at the same phase.
When two sine waves are added at the same phase, the resultant amplitude is twice that of a single wave over the whole frequency band, as shown by a broken line in the figure.
As described above, the array microphone configured in this manner has directivity: its sensitivity is highest for a sound source located in the direction in which the signals are added at the same phase, such as the sound source SB, and lower for sound sources in other directions.
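To make the behaviour just described concrete, the following Python sketch (an illustrative addition; the spacing, frequency and steering angle are arbitrary choices) evaluates the magnitude of the summed output of the four-microphone arrangement as a function of the arrival direction and shows that it peaks where the propagation delays cancel the inserted delays 3T, 2T and T:

```python
import numpy as np

c = 343.0          # speed of sound in m/s at room temperature (assumed)
d = 0.01           # microphone spacing in metres (10 mm)
f = 4000.0         # test frequency in Hz (arbitrary)
steer_deg = 30.0   # direction for which the inserted delays are tuned

# Inserted delays 3T, 2T, T, 0 for the microphones 31 to 34, with T = d*sin(theta)/c.
T = d * np.sin(np.radians(steer_deg)) / c
inserted = np.array([3 * T, 2 * T, T, 0.0])

gains = []
angles = np.radians(np.arange(-90, 91))
for a in angles:
    # Acoustic delay of each microphone relative to microphone 34 for arrival angle a.
    acoustic = np.array([3, 2, 1, 0]) * d * np.sin(a) / c
    # Phasor sum of the four unit-amplitude sine waves after the inserted delays.
    gains.append(abs(np.sum(np.exp(-1j * 2 * np.pi * f * (inserted - acoustic)))))

peak_deg = np.degrees(angles[int(np.argmax(gains))])
print("maximum response near", peak_deg, "degrees")  # close to steer_deg
```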
In the array microphones 17 and 18, it is therefore necessary to set, in the microphone directivity generating processor 19, the delay amounts that are most suitable for the respective delay units in order to obtain the desired directivity. A directivity generation processing circuit 40 that performs this setting is described next.
The directivity generation processing circuit 40 has variable delay units 41, 42, 43, and 44, a directional angle/delay conversion operating unit 45, and an adder 46. The microphones 31 to 34 are arranged in a line at the distance d. Outputs from the microphones 31 to 34 are supplied to the variable delay units 41 to 44, respectively. After the delay processing described later is applied to the output signals of the microphones 31 to 34 in the variable delay units 41 to 44, the output signals are all added and outputted by the adder 46.
The delay amount of each of the variable delay units 41 to 44 is set independently by the directional angle/delay conversion operating unit 45. Upon reception of the zoom position signal from the zoom lens 10, the directional angle/delay conversion operating unit 45 calculates a directional angle on the basis of the zoom position signal and converts it into the delay amount most suitable for each of the variable delay units 41 to 44. It is noted that when the directional angle is fixed at a prescribed position rather than varied with the zooming operation, the directional angle/delay conversion operating unit 45 fixes the delay amounts of the variable delay units 41 to 44 at prescribed values.
The directional angle/delay conversion operating unit 45 is now described in detail.
In a plane including all of the linearly arranged microphones 31 to 34, the angle of the front direction of the array is specified as 0°. When the directional angle is θ on one side of the front direction, the delay amounts T1 to T4 set in the variable delay units 41 to 44 are given by the following expressions, where c denotes the speed of sound:
T1=(3d·sin θ)/c
T2=(2d·sin θ)/c
T3=(d·sin θ)/c
T4=0
Likewise, when the directional angle is θ on the opposite side of the front direction, the delay amounts are given by:
T1=0
T2=(d·sin θ)/c
T3=(2d·sin θ)/c
T4=(3d·sin θ)/c
For instance, if the inter-microphone distance d is assumed to be 10 mm and the speed of sound is taken at room temperature, the delay amounts T1 to T4 to be set for typical directional angles θ (90°, 60°, 30°, 0°, −30°, −60°, −90°) can be calculated from the above expressions, as tabulated in the figure.
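These delay amounts follow directly from the above expressions. The following Python sketch computes them for the listed directional angles and the 10 mm spacing, assuming a speed of sound of about 343 m/s at room temperature:

```python
import math

C = 343.0   # speed of sound in m/s at room temperature (assumed value)
D = 0.010   # inter-microphone distance d = 10 mm

def delays_us(theta_deg: float) -> list[float]:
    """Delay amounts T1 to T4 in microseconds for a directional angle theta.
    A positive angle uses T1 = 3*d*sin(theta)/c, ..., T4 = 0 (the first set of
    expressions above); a negative angle mirrors the assignment, as in the
    second set of expressions."""
    step = D * math.sin(math.radians(abs(theta_deg))) / C
    ladder = [3.0 * step, 2.0 * step, 1.0 * step, 0.0]
    if theta_deg < 0:
        ladder.reverse()
    return [round(x * 1e6, 1) for x in ladder]

for theta in (90, 60, 30, 0, -30, -60, -90):
    print(theta, delays_us(theta))
# For example, theta = 30 deg gives T1 = 43.7 us, T2 = 29.2 us, T3 = 14.6 us, T4 = 0.
```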
Thus, in the array microphone configured as described above, setting the delay amounts in this way makes it possible to obtain directivity for an arbitrary directional angle θ. Moreover, if two sets of the directivity generation processing circuit 40 are provided for one array microphone, two directivity signals having different directional angles may be generated at the same time.
A configuration example of the microphone directivity generating processor 19 is now described.
The array microphone 17 is composed of a plurality of microphones arranged horizontally in the form of an array. Its output signals are inputted to an R-channel variable delay unit 52 and an L-channel variable delay unit 53, where they are given delay amounts by a horizontal directional angle calculating unit 54 so as to provide a directional angle matched to the captured image view angle. The horizontal directional angle calculating unit 54 varies the directional angle to match the zooming, depending on the zoom position signal from the zoom lens 10. The delayed signals are then added in adders 58 and 59 and outputted as an R-channel output 63 and an L-channel output 64.
Likewise, the array microphone 18 is composed of a plurality of microphones arranged vertically in the form of an array. Its output signals are inputted to a U-channel variable delay unit 56 and a D-channel variable delay unit 57, where they are given delay amounts by a vertical directional angle calculating unit 55 so as to provide a directional angle matched to the captured image view angle. The vertical directional angle calculating unit 55 varies the directional angle to match the zooming, depending on the zoom position signal from the zoom lens 10. The delayed signals are then added in adders 61 and 62 and outputted as a U-channel output 65 and a D-channel output 66.
The R-channel, L-channel, U-channel and D-channel outputs 63 to 66 generated as described above constitute the left-and-right and up-and-down sound signals, related to the video signal, that have been picked up from the respective directivity directions B, A, C and D shown in the figure.
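For illustration only, and not as the circuit of the specification, the following Python sketch models this configuration example as two delay-and-sum beamformers operating on sampled signals: the horizontal array is steered to two opposite angles for the R and L channels, and the vertical array likewise for the U and D channels. The sample rate, spacing, steering angles and all names are assumptions, and whole-sample circular shifts stand in for the variable delay units; a real implementation would use fractional delays and proper buffering.

```python
import numpy as np

C, FS = 343.0, 48_000   # speed of sound (m/s) and sample rate (Hz); both assumed values

def beam(mic_signals: np.ndarray, d: float, theta_deg: float) -> np.ndarray:
    """Delay-and-sum one linear array (rows = microphones) toward theta_deg.
    The delays n*d*sin(theta)/c are rounded to whole samples and applied as
    circular shifts purely to keep the sketch short."""
    n_mics, _ = mic_signals.shape
    step = d * np.sin(np.radians(theta_deg)) / C
    out = np.zeros(mic_signals.shape[1])
    for i in range(n_mics):
        shift = int(round((n_mics - 1 - i) * step * FS))  # first microphone gets the largest delay
        out += np.roll(mic_signals[i], shift)
    return out

def directivity_processor(horizontal: np.ndarray, vertical: np.ndarray,
                          theta_h: float, theta_v: float, d: float = 0.01):
    """Produce the four channel outputs from the two array microphones."""
    r = beam(horizontal, d, +theta_h)   # R channel: horizontal array steered to one side
    l = beam(horizontal, d, -theta_h)   # L channel: horizontal array steered to the other side
    u = beam(vertical, d, +theta_v)     # U channel: vertical array steered upward
    dn = beam(vertical, d, -theta_v)    # D channel: vertical array steered downward
    return r, l, u, dn

# Example with white-noise test signals for two four-microphone arrays:
rng = np.random.default_rng(0)
h = rng.standard_normal((4, FS))    # horizontal array, 1 s of samples
v = rng.standard_normal((4, FS))    # vertical array, 1 s of samples
r, l, u, dn = directivity_processor(h, v, theta_h=30.0, theta_v=20.0)
```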
Further, in the embodiment of the present invention, the array microphones 17 and 18 are adopted as the horizontal and the vertical sound generating means. Using the array microphones in combination with the microphone directivity generating processor 19 ensures that an optimum directivity may be generated easily by selecting the directivity direction through the delay amounts, and that the directivity characteristic may be optimized depending on the number of microphones, thereby enabling the directivity to be changed relatively freely.
While an embodiment of the present invention has been described in the foregoing, it is to be understood that the present invention is of course not limited to the above embodiment, and various modifications may be made on the basis of the technical concept of the present invention.
For instance, while the above embodiment of the present invention reproduces the horizontal and the vertical sound fields related to the video signal using the speakers 2 to 5 (or 6 to 9) arranged to surround the display 1 or the vicinity thereof, an omni-directional surround system may also be applied to the present invention in addition to the above.
For instance, a stereoscopic sound field reproduction system may be configured by combining the speakers 2 to 5 (or 6 to 9) arranged around the display 1 with surround speakers arranged around the viewer.
The use of the above stereoscopic sound reproduction system enables sound signals supporting a surround sound system, such as the 5.1-channel surround system, to be obtained easily. In this case, combining the surround sound field with the sound field matched to the direction of the object on the display according to the present invention may provide a richer feeling of presence for the viewer. It is noted that, when picking up such a multi-channel signal with microphones mounted in the video camera, etc., a directional microphone may be directed in each directivity direction, or alternatively, the array microphones may be combined with a surround microphone. Furthermore, available audio formats for recording the multi-channel signal from each direction include the MPEG-2 AAC (Advanced Audio Coding) format, etc., which supports up to 7.1 channels.
While the embodiments of the present invention described above each include the four speakers 2 to 5 or 6 to 9 arranged around the display 1, the number and the arrangement of the speakers are not limited thereto; for instance, a different number of speakers may be arranged around the display.
Meanwhile, as further embodiments of the present invention, these multi-channel sound field generating functions may be incorporated into the video camera so that the present invention is embodied in real time during recording and reproduction. Alternatively, the video and the multi-channel audio may be recorded individually and the present invention embodied as application software on a computer, as non-real-time processing at the time of audio-video file editing, file conversion or DVD writing.
Further, the present invention is also applicable to games. In this case, the same sound effects as above may be obtained by generating the sound signal for each direction around the display to match a sound source position in a computer graphics (CG) image.
In recent years, a technology has also been developed in which, for instance, a transparent diaphragm is mounted on the front face of the display and the sound field is reproduced by vibrating the diaphragm with the sound signal, without using any speakers around the display. The present invention may also be embodied by taking advantage of such sound output means.
The present document contains subject matter related to Japanese Patent Application JP 2004-248249 filed in the Japanese Patent Office on Aug. 27, 2004, the entire contents of which are incorporated herein by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
U.S. Patent Documents
Number | Name | Date | Kind
---|---|---|---
6298942 | Schlatmann et al. | Oct 2001 | B1
7206418 | Yang et al. | Apr 2007 | B2
7599502 | Stromme | Oct 2009 | B2
7602924 | Kleen | Oct 2009 | B2
20010055059 | Satoda | Dec 2001 | A1
20020159603 | Hirai et al. | Oct 2002 | A1
20050111674 | Hsu | May 2005 | A1
20050146601 | Chu et al. | Jul 2005 | A1
20050152565 | Jouppi et al. | Jul 2005 | A1
Foreign Patent Documents
Number | Date | Country
---|---|---
1 035 732 | Sep 2000 | EP
01-178952 | Jul 1989 | JP
06-035489 | Feb 1994 | JP
06-062349 | Mar 1994 | JP
06-090492 | Mar 1994 | JP
06-327090 | Nov 1994 | JP
2000-010756 | Jan 2000 | JP
2000-298933 | Oct 2000 | JP
2000-299842 | Oct 2000 | JP
2002-191098 | Jul 2002 | JP
2003-264900 | Sep 2003 | JP
WO-0018112 | Mar 2000 | WO