Field
Example aspects of the present invention generally relate to image and audio processing and, more particularly, to audio-video compositing of captured scene data for social sharing.
Related Art
Cameras and microphones for desktops, laptops, and mobile devices are commonly used to capture user data for the purpose of social sharing. Digital entertainment products (e.g., JibJab) allow users to insert a still image of their face into a scene, which is then animated and shared. Mobile video-sharing applications (e.g., Viddy®) allow users to record themselves with video effects and share the result. Applications such as Action Movie FX® use special-effect overlays to combine a live video stream with special effects, thereby allowing users to incorporate those effects into user-created videos.
The example embodiments described herein provide systems, apparatuses, methods, and computer program products for audio-video compositing. In one example embodiment, the method comprises recording and compositing a first video track of an overlay alpha video, a second video track of a video stream, and an audio track, and playing back the first and second video tracks and the audio track in real time.
The features and advantages of the example embodiments presented herein will become more apparent from the detailed description set forth below when taken in conjunction with the drawings.
I. Overview
The example embodiments presented herein are directed to apparatuses, methods, and computer program products for image processing in an environment using consumer devices. This description is not intended to limit the application of the example embodiments presented herein. In fact, after reading the following description, it will be apparent to one skilled in the relevant art(s) how to implement the following example embodiments in alternative environments, such as a services-based environment, a web services-based environment, and/or other environments.
According to one aspect, the example embodiments herein composite a video that provides an alpha channel (the overlay) on top of a video stream (such as a live camera stream or a pre-recorded video). The alpha channel information is used to create transparent regions and semi-transparent regions so that the user's video stream can be combined with the overlay. Different overlays provide users with different virtual experiences, allowing them to interact with the video in creative ways.
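In conventional alpha-compositing ("over") notation, which is a standard formulation rather than one specific to this disclosure, each output pixel may be computed as

$$C_{\text{out}} = \alpha \, C_{\text{overlay}} + (1 - \alpha) \, C_{\text{stream}}$$

where $\alpha \in [0, 1]$ is the overlay's alpha value at that pixel: $\alpha = 1$ shows only the overlay, $\alpha = 0$ shows only the underlying video stream, and intermediate values yield semi-transparency.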
The overlay video may also include a soundtrack, which is mixed with audio captured from a microphone. After the user records their performance, they can preview it to check their work. If they are happy with the result, the final video, consisting of the recorded overlay alpha video and the recorded camera video, is composited, and the audio is mixed into a single file, which can then be shared via email, social media (e.g., Facebook®, Twitter®), and/or by other means. The resulting video can be played back on a desktop personal computer, a laptop computer, a television, a mobile communication device, and/or any other type of computing device.
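As an illustrative sketch of the soundtrack/microphone mix (assuming both tracks are mono float arrays at the same sample rate; the function and parameter names here are hypothetical):

```python
import numpy as np

def mix_tracks(soundtrack: np.ndarray, mic: np.ndarray,
               mic_gain: float = 1.0) -> np.ndarray:
    """Mix the overlay soundtrack with captured microphone audio.

    Pads the shorter track with silence, sums the signals, and clips
    to [-1.0, 1.0] to avoid wrap-around distortion on export.
    """
    n = max(len(soundtrack), len(mic))
    mixed = np.zeros(n, dtype=np.float32)
    mixed[:len(soundtrack)] += soundtrack
    mixed[:len(mic)] += mic_gain * mic
    return np.clip(mixed, -1.0, 1.0)
```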
The following is a description of certain filters and corresponding video effects that may be provided in accordance with example embodiments herein:
The following is a description of certain audio effects that may be provided in accordance with example embodiments herein.
In one example embodiment, a video feed is resized into a customized container, such as an airplane, a submarine with windows, a spaceship, and/or another type of container. The container can be animated and/or can move around while the live video plays inside the container.
In another example embodiment, face detection and/or custom compositing is performed, including the detection of facial features, such as eyeballs, mouth, and/or other facial features. Other example face detection and/or custom compositing techniques that may be provided in accordance with example embodiments herein include:
According to one example embodiment herein, an interaction is provided between the overlay and the recorded video, wherein face detection and/or motion information is used to animate and/or change the overlay in response to the recorded video.
In another example embodiment, a reaction is recorded, wherein a video is sent to a viewer and the viewer's reaction to the video is recorded. The recorded reaction video can then be sent back to the sender, who can then view the reaction.
In yet a further example embodiment, a story chain video is passed from one person to the next, with each person adding their part of the story. The final video can be processed from all participant clips into a single video for social sharing.
Further features and advantages, as well as the structure and operation, of various example embodiments herein are described in detail below with reference to the accompanying drawings.
II. System
More specifically, capture device 101 is a device which may include hardware and/or software for capturing alpha information of a scene, as well as color data of the scene. For example, a color camera and/or CMOS sensor may capture color data, such as YUV data, RGB data, or data in other color spaces, whereas an infrared sensor or other alpha-sensing technology may capture alpha information of the scene (e.g., where a player is standing in three dimensions in relation to other objects). The alpha information and color data may then be transferred to other devices for processing, such as image processing device 102.
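For instance, if the sensor delivers a per-pixel depth map, alpha information could be derived by depth keying, as in the following sketch (the near/far thresholds are illustrative assumptions, not values from this disclosure):

```python
import numpy as np

def depth_to_alpha(depth_mm: np.ndarray, near: float = 500.0,
                   far: float = 2000.0) -> np.ndarray:
    """Derive a per-pixel alpha matte from a depth map (millimeters).

    Pixels nearer than `near` are fully opaque (the subject), pixels
    beyond `far` are fully transparent (the background), and depths
    in between fade linearly to soften the matte's edges.
    """
    alpha = (far - depth_mm) / (far - near)
    return np.clip(alpha, 0.0, 1.0).astype(np.float32)
```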
Image processing device 102 is a device which processes the alpha information and color data output by capture device 101 in order to generate output for display on display device 103. In one example as shown in
In
Display device 103 outputs image and/or video data from image processing device 102, such as a display of the player combined with video data as shown in
In that regard, while
III. Device
The image processing device 200 may include without limitation a processor device 210, a main memory 225, and an interconnect bus 205. The processor device 210 may include without limitation a single microprocessor, or may include a plurality of microprocessors for configuring the image processing device 200 as a multi-processor system. The main memory 225 stores, among other things, instructions and/or data for execution by the processor device 210. The main memory 225 may include banks of dynamic random access memory (DRAM), as well as cache memory.
The image processing device 200 may further include a mass storage device 230, peripheral device(s) 240, portable storage medium device(s) 250, input control device(s) 280, a graphics subsystem 260, and/or an output display interface 270. For explanatory purposes, all components in the image processing device 200 are shown in
The portable storage medium device 250 operates in conjunction with a nonvolatile portable storage medium, such as, for example, a compact disc read only memory (CD-ROM), to input and output data and code to and from the image processing device 200. In some embodiments, software for storing image data may be stored on a portable storage medium, and may be input into the image processing device 200 via the portable storage medium device 250. The peripheral device(s) 240 may include any type of computer support device, such as, for example, an input/output (I/O) interface configured to add functionality to the image processing device 200. For example, the peripheral device(s) 240 may include a network interface card for interfacing the image processing device 200 with a network 220.
The input control device(s) 280 provide a portion of the user interface for a user of the image processing device 200. The input control device(s) 280 may include a keypad and/or a cursor control device. The keypad may be configured for inputting alphanumeric characters and/or other key information. The cursor control device may include, for example, a handheld controller or mouse, a trackball, a stylus, and/or cursor direction keys. In order to display textual and graphical information, the image processing device 200 may include the graphics subsystem 260 and the output display interface 270. The output display interface 270 may include hardware for interfacing with a cathode ray tube (CRT) display and/or a liquid crystal display (LCD) such as display device 103. The graphics subsystem 260 receives textual and graphical information, and processes the information for output to the output display interface 270.
Each component of the image processing device 200 may represent a broad category of computer components of a general- and/or special-purpose computer. Components of the image processing device 200 are not limited to the specific implementations provided here.
IV. Processes
A. Alpha Video Overlay
In one example embodiment, an alpha overlay video is composited on top of the live camera stream. A user records a performance and can play it back in real time. The composited stream is then stored.
B. Video Effects
In another example embodiment, the video stream is modified in various ways, for example to improve a user's appearance or to mask a user's face.
C. Face Detection for Specialized Compositing
In still another example embodiment, the user's face is detected using standard techniques, whereby the eyes, mouth, and other facial features can be individually composited into the scene.
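One example of such standard techniques is a Haar-cascade detector, sketched here with OpenCV (the cascade files ship with the library; this is an illustration, not the required method):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_regions(frame_bgr):
    """Return a list of (face_rect, eye_rects) tuples for one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    regions = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]  # search for eyes within the face
        eyes = [(x + ex, y + ey, ew, eh)
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi)]
        regions.append(((x, y, w, h), eyes))
    return regions
```

Each returned rectangle can then be composited individually into the scene.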
D. Container Compositing
According to another example, the user's video stream is resized and placed in a container object, such as a car, an airplane, a spaceship, or a submarine with windows. The container object can also be animated.
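A minimal sketch of container compositing (assuming the container artwork is an RGBA image whose transparent window region is known as a rectangle; the names and layout are illustrative):

```python
import cv2
import numpy as np

def composite_into_container(user_frame: np.ndarray,
                             container_rgba: np.ndarray,
                             window: tuple) -> np.ndarray:
    """Place a resized user frame inside a container graphic's window.

    `window` is (x, y, w, h) in container coordinates. The container's
    alpha channel determines where its artwork covers the video.
    """
    x, y, w, h = window
    canvas = np.zeros(container_rgba.shape[:2] + (3,), dtype=np.uint8)
    canvas[y:y + h, x:x + w] = cv2.resize(user_frame, (w, h))
    alpha = container_rgba[:, :, 3:4].astype(np.float32) / 255.0
    art = container_rgba[:, :, :3].astype(np.float32)
    out = alpha * art + (1.0 - alpha) * canvas.astype(np.float32)
    return out.astype(np.uint8)
```

Animating the container then amounts to varying `window` (and the artwork) from frame to frame.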
E. Audio Effects
In another example embodiment, the user's voice is changed in pitch or another characteristic in real time, and the changed voice is recorded.
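A deliberately naive sketch of one such effect, a pitch change by resampling (note this also changes the clip's duration; a duration-preserving implementation would typically use a phase vocoder instead):

```python
import numpy as np

def pitch_shift_naive(samples: np.ndarray, semitones: float) -> np.ndarray:
    """Shift the pitch of a mono track by resampling it.

    A positive `semitones` value raises the pitch and shortens the
    clip; a negative value lowers the pitch and lengthens it.
    """
    factor = 2.0 ** (semitones / 12.0)
    positions = np.arange(0, len(samples) - 1, factor)
    return np.interp(positions, np.arange(len(samples)), samples)
```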
F. Reaction Recording
In a further example aspect, a first user creates a recording and then sends it to a second user. While the second user watches the first user's recording, the second user's reaction is recorded. The second user's recording is then sent back to the first user.
G. Recording Chain
In another example, a first user creates a recording and then sends it to a second user. The second user creates a recording, which is appended to the first recording. The second user sends the combined recording to a third user, and the process repeats until the last user records their part. The final recording is processed from all of the individual recordings into a single recording.
H. Teleprompter
In accordance with another example embodiment, a teleprompter is rendered to provide the user with dialog and stage direction.
I. Interactive Overlays
In still another example embodiment herein, an overlay responds to movement and facial information in the video. Examples include, but are not limited to, changing the size, speed, location, and type of the overlay based on live video information.
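As one illustrative mapping (the scale bounds are arbitrary assumptions), a detected face rectangle can drive the overlay's placement and size:

```python
def overlay_params_from_face(face_rect, frame_size):
    """Map a detected face to overlay placement parameters.

    Returns (center_x, center_y, scale): the overlay is centered on
    the face and scaled with the face's apparent size, so it tracks
    the user as they move toward or away from the camera.
    """
    x, y, w, h = face_rect
    frame_w, _frame_h = frame_size
    scale = max(0.5, min(2.0, w / (frame_w * 0.25)))
    return (x + w // 2, y + h // 2, scale)
```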
At block 603, a video stream is obtained, which may be a live video stream obtained by a camera (block 601) or a pre-recorded video stream (block 602). In some example embodiments, the video stream is obtained by a capture device such as the capture device 101 shown in
The video stream is then processed according to one or more techniques. For example, at block 604, face detection may be performed on the video stream. At block 605, audio effects (such as, for example, the audio effects described above) may be applied to the audio stream. At block 606, video effects (such as, for example, the video effects described above) may be applied to the video stream. At block 607, the video stream may be processed to be interactive. An overlay (e.g., an overlay alpha video) may be generated at block 608, at least in part, by executing the example pseudocode provided below for implementing alpha compositing using an overlay video and a camera stream. Then, based on the interaction processing performed at block 607, the overlay may be caused to be responsive to movement and facial information in the video stream.
At block 609, the video stream obtained at block 603 and processed at one or more of blocks 604, 605, 606, and 607 is composited with the overlay generated at block 608. The following is example pseudocode for implementing (e.g., in connection with block 608 and block 609) alpha compositing using an overlay video and a camera stream:
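A minimal sketch of such per-pixel compositing, written here as Python-style pseudocode (illustrative only; it assumes 8-bit RGBA overlay frames and 8-bit RGB camera frames, and the names are arbitrary):

```python
import numpy as np

def composite_frame(overlay_rgba: np.ndarray,
                    camera_rgb: np.ndarray) -> np.ndarray:
    """Alpha-composite one overlay frame over one camera frame.

    Implements the 'over' operator per pixel:
        out = alpha * overlay + (1 - alpha) * camera
    """
    alpha = overlay_rgba[:, :, 3:4].astype(np.float32) / 255.0
    overlay = overlay_rgba[:, :, :3].astype(np.float32)
    camera = camera_rgb.astype(np.float32)
    return (alpha * overlay + (1.0 - alpha) * camera).astype(np.uint8)

# Streaming loop (schematic): composite each overlay frame over the
# corresponding camera frame and hand the result to the display or
# recorder in real time.
```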
Of course, other implementations are also possible, and this example implementation should not be construed as limiting.
At block 610, a user may preview the video stream obtained at block 603 and processed at one or more of blocks 604, 605, 606, and 607, to check their work.
At block 611, if the user approves of the preview, the video composited at block 609, consisting of the overlay alpha video and the recorded video stream, as well as audio if applicable, is recorded as an audio-video file on a storage device.
At block 612, the audio-video file recorded at block 611 may be shared via email, social media (e.g., Facebook®, Twitter®), and/or by other means.
The audio-video file shared at block 612 can be viewed by a second user (e.g., a viewer) on a desktop personal computer, a laptop computer, a television, a mobile communication device, and/or any other type of computing device.
At block 613, the second user creates a second recording, which is appended to the audio-video file shared at block 612, thereby resulting in a second audio-video file. The second user can then share the second audio-video file with a third user, and the process may repeat until a final user has appended their recording. The final result is a single audio-video recording consisting of the respective recordings contributed by each user.
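As one possible sketch of the final assembly step, using the moviepy library as an example (the file paths are hypothetical):

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

def assemble_story_chain(clip_paths, out_path="story_chain.mp4"):
    """Join each participant's clip, in order, into one shareable file."""
    clips = [VideoFileClip(p) for p in clip_paths]
    final = concatenate_videoclips(clips)
    final.write_videofile(out_path)
    for clip in clips:
        clip.close()
```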
At block 614, while the viewer views the audio-video file, their reaction is recorded. The recording of the viewer can then be sent back to the original user.
V. Computer Readable Medium Implementation
The example embodiments described above such as, for example, the systems and procedures depicted in or discussed in connection with
Portions of the example embodiments of the invention may be conveniently implemented using a conventional general-purpose computer, a specialized digital computer, and/or a microprocessor programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure.
Some embodiments may also be implemented by the preparation of application-specific integrated circuits, field programmable gate arrays, or by interconnecting an appropriate network of conventional component circuits.
Some embodiments include a computer program product. The computer program product may be a storage medium or media having instructions stored thereon or therein which can be used to control, or cause, a computer to perform any of the procedures of the example embodiments of the invention. The storage medium may include without limitation a floppy disk, a mini disk, an optical disc, a Blu-ray Disc, a DVD, a CD or CD-ROM, a micro-drive, a magneto-optical disk, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nanosystems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data.
Stored on any one of the computer readable medium or media, some implementations include software for controlling both the hardware of the general and/or special computer or microprocessor, and for enabling the computer or microprocessor to interact with a human user or other mechanism utilizing the results of the example embodiments of the invention. Such software may include without limitation device drivers, operating systems, and user applications. Ultimately, such computer readable media further includes software for performing example aspects of the invention, as described above.
Included in the programming and/or software of the general and/or special purpose computer or microprocessor are software modules for implementing the procedures described above.
While various example embodiments of the invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It is apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the disclosure should not be limited by any of the above described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
In addition, it should be understood that the figures are presented for example purposes only. The architecture of the example embodiments presented herein is sufficiently flexible and configurable, such that it may be utilized and navigated in ways other than that shown in the accompanying figures.
Further, the purpose of the Abstract is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that the procedures recited in the claims need not be performed in the order presented.
The present application claims priority to U.S. Provisional Application No. 61/819,777, filed on May 6, 2013, the entire contents of which are hereby incorporated by reference as if set forth fully herein.