This application claims the benefit, under 35 U.S.C. § 365 of International Application PCT/EP2016/073107, filed 28 Sep. 2016, which was published in accordance with PCT Article 21(2) on 6 Apr. 2017 in English and which claims the benefit of European Patent Application No. 15306528.9, filed 29 Sep. 2015.
The present disclosure relates generally to digital recording and photography, and more particularly to digital recording and photography via a plenoptic camera using audio-based selection of focal plane and depth.
Photography creates durable images by recording light or other electromagnetic radiation. Images are captured electronically by means of an image sensor or chemically by means of a light-sensitive material. Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, an electrical charge is produced at each pixel, which is then processed and stored in a digital image file for further use. In classic photography, the focal surface is approximately a plane, or focal plane. The focal surface is perpendicular to the optical axis of the camera, and the depth of field is constant along the plane. The images that are captured are basic in configuration, since their range is limited by these rules relating to focal surface and depth of field. By contrast, light field or plenoptic cameras offer more complex configurations.
A plenoptic camera uses a micro-lens array positioned in the image plane of a main lens and before an array of photosensors, onto which one micro-image (also called a sub-image) is projected per micro-lens. Consequently, each micro-image depicts an area of the captured scene, and each pixel associated with that micro-image shows this area from the point of view of a certain sub-aperture location on the main-lens exit pupil. The raw image of the scene is then obtained as the sum of all the micro-images acquired from respective portions of the photosensor array. This raw image contains the angular information of the light field. In theory, plenoptic cameras therefore offer the possibility of computing superior image captures with complex configurations that are not available with classic cameras. Unfortunately, however, many practical shortcomings have limited the prior art from taking advantage of the possibilities that plenoptic cameras offer. These limitations have been even more challenging when attempting to capture video content.
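The disclosure does not prescribe a particular refocusing algorithm, but the classic shift-and-sum method illustrates how the angular information in a plenoptic capture supports refocusing after the fact. Below is a minimal sketch, assuming the raw image has already been decoded into a 4-D array of sub-aperture views; the `views` layout and the `slope` parameter are illustrative assumptions, not details from the disclosure:

```python
import numpy as np

def refocus_shift_and_sum(views, slope):
    """Refocus a decoded light field by shift-and-sum.

    views : array of shape (U, V, H, W) holding one grayscale
            sub-aperture image per (u, v) location on the main-lens
            exit pupil (this layout is an assumption of the sketch).
    slope : refocus parameter; 0 reproduces the captured focus,
            other values move the synthetic focal plane.
    """
    U, V, H, W = views.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its pupil offset,
            # rounded to whole pixels to keep the sketch short
            du = int(round(slope * (u - uc)))
            dv = int(round(slope * (v - vc)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweeping `slope` over a range of values yields a focal stack, which is what makes the post-capture focus choices discussed below possible.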
A method and system are provided for refocusing images captured by a plenoptic camera. In one embodiment, the plenoptic camera operates in conjunction with an audio capture device. The method comprises the steps of determining the direction of a dominant audio source associated with an image; creating an audio zoom by filtering out all audio signals except those associated with said dominant audio source; and performing automatic refocusing of said image based on said created audio zoom.
In a different embodiment, an audio-based refocusing image system is provided that comprises: a plenoptic video camera for capturing video images; audio capturing means for capturing audio associated with the captured images; means for determining a dominant audio source; means for performing an audio signal analysis to determine the direction of the dominant audio source; means for identifying an audio scene of interest based on the direction of the dominant audio source; means for creating an audio zoom by conducting beamforming on the audio scene of interest to selectively filter out all audio signals except those associated with the dominant audio source; and means for providing automatic refocusing of the image based on the created audio zoom.
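The recited steps map onto a simple processing pipeline. The structural sketch below only illustrates that flow; each stage is injected as a callable because the disclosure deliberately leaves the concrete algorithms open (candidate implementations of the individual stages are sketched later in this description):

```python
from typing import Callable
import numpy as np

def audio_guided_refocus(
    light_field: np.ndarray,
    mic_signals: np.ndarray,
    estimate_direction: Callable[[np.ndarray], float],
    audio_zoom: Callable[[np.ndarray, float], np.ndarray],
    refocus: Callable[[np.ndarray, float], np.ndarray],
):
    direction = estimate_direction(mic_signals)   # step 1: dominant source direction
    zoomed = audio_zoom(mic_signals, direction)   # step 2: audio zoom on that direction
    refocused = refocus(light_field, direction)   # step 3: image refocusing
    return refocused, zoomed
```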
Additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered as part of the claimed invention. For a better understanding of the invention with advantages and features, refer to the description and to the drawings.
The invention will be better understood and illustrated by means of the following embodiment and execution examples, in no way limitative, with reference to the appended figures on which:
In
Wherever possible, the same reference numerals will be used throughout the figures to refer to the same or like parts.
It is to be understood that the figures and descriptions of the present invention have been simplified to illustrate elements that are relevant for a clear understanding of the present invention, while eliminating, for purposes of clarity, many other elements found in typical digital multimedia content delivery methods and systems. However, because such elements are well known in the art, a detailed discussion of such elements is not provided herein. The disclosure herein is directed to all such variations and modifications.
In classic photography, the focal surface is a plane perpendicular to the optical axis of the camera. With plenoptic cameras, similar re-focus properties can be exploited when taking still pictures, as the user interaction remains at a basic level. This is not the case with video capture and live image streaming using a plenoptic camera, where more sophisticated computations are needed. Since the scenes and images captured by the lens array of a plenoptic camera are captured from different angles, and there are different options for choosing the extent of sharpness of different images in a scene, setting the focusing properties of different scenes and images can be challenging. It would be desirable to use automatic refocusing techniques, but it is challenging to do so where the focal planes must remain perpendicular to the optical axis. This is because in many instances the focal planes cannot remain perpendicular to the optical axis, especially during an ever-changing video or live stream broadcast. Other examples can easily be imagined. For example, consider a case where an “all-in-focus” mode is utilized. In this case, the captured scene produces images that must all remain intentionally sharp, irrespective of distance. This can amount to infinite depth of field and arbitrary focus planes that are not perpendicular to the optical axis. In a different example, an “interactive focus” mode can be used that allows a user to point at and select an object of interest. In this case, the plane of focus must be computationally placed at the right distance for each image, and the focal plane is perpendicular to the optical axis only for objects that must be kept in sharp focus. In a similar case, only objects in close proximity may be selected to produce sharp images. In such a case, the depth of field is kept constant at a small value, and all scene elements at a distance are computed differently than the closer ones. Consequently, as objects get out of focus they appear intentionally blurred. In yet another case, the camera is disposed such that the focus plane is slanted and therefore the focal planes are not perpendicular to the optical axis.
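For the “interactive focus” example, one plausible reading is that a depth estimate at the selected pixel drives the refocus parameter. The sketch below assumes a per-pixel depth map and a camera-specific `depth_to_slope` calibration tying depth to the shift-and-sum slope of the earlier sketch; both are hypothetical inputs, not elements of the disclosure:

```python
import numpy as np

def interactive_focus_slope(depth_map, click_xy, depth_to_slope):
    """Pick the refocus parameter for an "interactive focus" mode.

    depth_map      : per-pixel depth estimate of shape (H, W),
                     e.g. derived from the light field (assumed input)
    click_xy       : (x, y) pixel the user pointed at
    depth_to_slope : calibration callable mapping a depth value to a
                     refocus slope (assumed, camera-specific)
    """
    x, y = click_xy
    # median over a small window makes the pick robust to depth noise
    window = depth_map[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
    return depth_to_slope(float(np.median(window)))
```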
Referring back to the embodiment of
The system of
Now referring back to
Applying steps 105 to 110 of
In one example, where an audio source localization algorithm is used to further establish audio directions, an audio signal analysis is also performed to provide an approximate estimate of the angular width of the area of interest. The audio source localization technique is then used to define the angular extent of a region of interest, resulting in a “cone of interest” as shown in
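The disclosure does not fix a particular localization algorithm; GCC-PHAT over a microphone pair is one widely used candidate, sketched below under the assumption of a single pair with known spacing (the function name and geometry are illustrative). Running it over short frames and taking the spread of the per-frame angles is one plausible way to estimate the angular width of the cone of interest:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def gcc_phat_direction(x, y, fs, mic_distance):
    """Direction of arrival for one microphone pair via GCC-PHAT.

    Returns the angle (radians) between the source direction and the
    broadside of the pair; x and y are the two channel recordings,
    fs the sample rate, mic_distance the spacing in metres.
    """
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12          # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = max(1, int(fs * mic_distance / SPEED_OF_SOUND))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs   # time difference of arrival
    sin_theta = np.clip(tau * SPEED_OF_SOUND / mic_distance, -1.0, 1.0)
    return float(np.arcsin(sin_theta))
```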
In one embodiment, as shown in
Referring back to
In one embodiment, as shown in
In
In an alternate embodiment (not shown), a user interaction system can be provided in which the user selects (i) a direction among the audio-based identified candidate directions and (ii) a width. Based on this selection, in one embodiment, an audio beamforming technique such as the one discussed can be used to focus on sounds coming from the particular chosen direction. The final focal surface and depth of field are then selected and rendered, as before, according to the direction and width information.
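Among beamforming techniques, delay-and-sum is the simplest concrete instance; the disclosure leaves the beamformer choice open, so the sketch below is only one candidate. It assumes a linear array and derives the per-channel steering delays from the chosen direction:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(mic_positions, angle):
    """Per-microphone delays (s) steering a linear array to `angle`
    radians off broadside; positions are in metres along the array."""
    return np.asarray(mic_positions) * np.sin(angle) / SPEED_OF_SOUND

def delay_and_sum(signals, delays, fs):
    """Delay each channel by its steering delay and average, which
    reinforces sound from the chosen direction and attenuates the rest.

    signals : (n_mics, n_samples) array of time-aligned recordings
    """
    n_mics, n_samples = signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # a fractional-sample delay is a linear phase ramp in frequency
        spectrum = np.fft.rfft(signals[m]) * np.exp(-2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics
```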
In the embodiments of
xx(t) = alpha * x(t) + (1 − alpha) * y(t),
where alpha is the weighting factor. In this example, a higher value of alpha means that the audio signal recorded from local microphone position B contributes more to the final audio focus, as shown.
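A direct transcription of this weighting into code, with `x` and `y` as equal-length sample arrays (per the text's example, `x` is taken to be the local position-B recording):

```python
import numpy as np

def weighted_audio_focus(x, y, alpha):
    """Blend two recordings per xx(t) = alpha*x(t) + (1 - alpha)*y(t).
    A larger alpha weights x(t) -- the position-B recording in the
    text's example -- more heavily in the final audio focus."""
    assert 0.0 <= alpha <= 1.0, "alpha is a convex weight"
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return alpha * x + (1.0 - alpha) * y
```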
Now referring back again to