The disclosure relates generally to image processing and more particularly to systems and methods for rolling shutter correction using an iterative image correction process.
Images captured with CMOS sensors and/or other sensors use a rolling shutter that exposes rows of pixels to light at slightly different times during image capture. This slight temporal shift between row start times may result in a deformed image caused by image capture device motion. Some image correction solutions may not account for high-frequency image capture device motion and may require dedicated hardware.
One or more aspects of the present disclosure relate to performing image correction for rolling shutter. Image correction may include correction of image deformities such as wobble, for example, and/or other deformities. Image deformities may occur due to a rolling shutter that exposes rows of pixels to light at slightly different times during image capture. A corrected image may account for image capture device movement between exposures of rows of pixels.
Image capture component may be configured to obtain one or more input images captured by one or more imaging sensors of one or more image capture devices. By way of non-limiting illustration, an input image may be obtained from an imaging sensor of an image capture device.
One or more imaging sensors used to capture the input image may be one or more rigidly mounted active pixel sensors such as CMOS and/or other sensors. The input image captured by the imaging sensor may comprise an array of pixels. The input image captured by the imaging sensor may comprise photosites or pixels acquired at slightly different moments in time. Contents of pixels in the pixel array may be captured on a row-by-row basis so that contents of an individual row of pixels may be captured after a time delay relative to the acquisition of contents of another row of pixels within the array. Although the rolling shutter configuration described in this application involves row-by-row capture, this is not intended to be limiting, and the concepts described herein are equally applicable to any image sensor that captures or “reads out” photosites or pixels on a subset-by-subset basis regardless of whether or not the subsets are rows.
Pixels of the pixel array may be characterized by pixel contents. Pixel contents may include pixel position information specifying the individual pixel positions within the pixel array. Pixel position within the array may be expressed in terms of x and y coordinates, and/or other coordinate schemes. Contents of pixels in the pixel array may include pixel values within the pixel array. Pixel value may include a numeric representation associated with specific colors in a color space (e.g., RGB, CMYK, and/or other color spaces).
In some implementations, the image capture device may include a lens. The lens may be configured to provide light waves to the imaging sensor. The lens may be characterized by a lens projection. In some implementations, the input image may be configured based on the lens projection and the input image may be configured in an equirectangular projection and/or other lens projection. In some implementations, the lens projection may be characterized by a rectilinear transformation and/or other lens projection.
The imaging sensor may typically capture an image over a time window configured between approximately 4 milliseconds and 1000 milliseconds, depending on an exposure and/or an image frame rate. During image capture, the imaging sensor performs row-by-row capture of pixel values wherein a given row of pixels of the image may be sampled at a given time. The row-to-row delay may result in a “rolling shutter” effect, where objects in the resultant image (i.e., the input image) may be skewed (tilted to the left or right, depending on the direction of image capture device or subject movement).
The acquisition time component may be configured to obtain acquisition time specifying time of capture of one or more sets of pixels within the pixel array of the input image. Pixel acquisition time may be determined using an imaging sensor internal clock and/or imaging sensor configurations. Imaging sensor configurations may determine a maximum frame rate and/or other imaging sensor parameters. Pixel acquisition time may be determined based on image exposure duration and position of the pixel in the image array (pixel row index). The acquisition time component may be configured to associate acquisition time specifying time of capture of one or more sets of pixels within the pixel array with row locations corresponding to pixels in the input image.
The orientation component may be configured to obtain orientation information specifying imaging sensor orientations at the acquisition times of the sets of input pixels within the input pixel array. The orientation component may be configured to obtain orientation information based on an analysis of the first pixel row of the input image and the second pixel row of the input image. In some implementations, the orientation component may be configured to be coupled to an orientation sensor. The orientation sensor may comprise one or more of a gyroscope, accelerometer, and/or other orientation sensors.
The orientation sensor (IMU) may be configured to provide orientation information of the image capture device. The orientation sensor may comprise one or more of a gyroscope, accelerometer, and/or other orientation sensors. The motion of the image capture device may cause changes in the image capture device orientation when different portions (e.g., different rows) of the input image are captured.
Images may reflect image capture device movements during image acquisition, which may result in deformities caused by changes in orientation of the imaging sensor at the pixel acquisition time instance. Image deformities may be corrected by determining the changes in orientation of the imaging sensor at the pixel acquisition time instance. Obtaining orientation of the imaging sensor at the pixel acquisition time instance may be used in correcting the input image deformities associated with the motion of the image capture device. It will be recognized by those skilled in the art that various other image characteristics configured for individual input images, including but not limited to lens aperture, exposure value, depth of focus, color content, image sharpness, saturation, white balance, field of view, resolution, image size, lens projection (e.g., equirectangular, rectilinear), and/or other parameters may be used when correcting image deformities.
The output pixel array component may be configured to determine an output pixel array based on the input image. The output pixel array may represent a pixel array of the input image obtained by the input image component that has been transformed to account for changes in the imaging sensor orientation at different acquisition times, as described in U.S. patent application Ser. No. 15/387,468, which is incorporated by reference in its entirety.
By way of non-limiting illustration, a first output cell may be determined based on a first orientation information specifying a first imaging sensor orientation obtained at a first acquisition time of first set of pixels of the input image. The output pixel array component may be configured to determine a second output cell based on differences between the first output cell (determined based on the first orientation information specifying the first imaging sensor orientation obtained at the first acquisition time of first set of pixels of the input image) and a second orientation information specifying a second imaging sensor orientation obtained at a second acquisition time of a second pixel of the input image.
The iteration component may be configured to determine an output image based on the output pixel array and the input image. The output image may represent a corrected version of the input image obtained by the input image component, produced by determining a location of a pixel within the input image corresponding to a location of a pixel within the output pixel array obtained by the output pixel array component via one or more fixed point iterations. The iteration component may be configured to determine pixel locations on a sensor line y within the input image using imaging sensor orientations R(y) at a time the imaging sensor has acquired a row of pixels in the input image corresponding to the output cell within the output pixel array. Sensor line y may correspond to a row of pixels parallel to the x-axis of the input pixel array expressed in terms of x and y coordinates, and/or other positions. The iteration component may be configured to determine pixel locations on an imaging sensor line y within the input image by performing one or more fixed point iterations.
In some implementations, a pixel location within the input image may be determined using fixed point iterations expressed as yn+1=f(yn), n=1, 2, 3, . . . , N. In one or more implementations, the number of iterations may be determined based on a difference between sensor line yn of a given iteration and sensor line yn−1 of a preceding iteration. Iterations may continue until no significant difference between successive values of yn is found.
In some implementations, during an initial iteration n=1, imaging sensor orientation R(y) may be used by the iteration component. At a subsequent iteration n>1, a pixel may be selected in the output pixel array. A corresponding row yn+1 of the input image may be determined using the fixed point iteration process.
The iteration component may be configured to determine output pixel positions by performing one or more fixed point iterations to identify one or more input pixels within the input image that correspond to one or more pixels in the output pixel array. The iteration component may be configured to determine output pixel values based on pixel values of the input image.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
Users may capture images using image capture device 126. Image capture device 126 may include one or more of a computing platform, a mobile device (e.g., a smart phone, a tablet, and/or other mobile device), a camera (e.g., an action camera, a sports camera, and/or other type of camera), a video recorder, and/or other device configured to capture images and/or video segments. Image capture device 126 may include one or more sensors including one or more of a GPS, a gyro, a compass, an accelerometer, a temperature sensor, a pressure sensor, a depth sensor, imaging sensor, a sound transducer, and/or other sensors. One or more sensors may be located external to image capture device 126 and may provide information obtained via the one or more sensors external to image capture device 126.
Image capture component 106 may be configured to obtain one or more input images captured by one or more imaging sensors of one or more image capture devices. By way of non-limiting illustration, an input image may be obtained from an imaging sensor of an image capture device.
One or more imaging sensors used to capture the input image may be one or more rigidly mounted active pixel sensors such as CMOS and/or other sensors. Such imaging sensors may be produced using complementary metal-oxide-semiconductor (CMOS) technology for constructing integrated circuits and hence may be referred to as CMOS image sensors.
The input image captured by the imaging sensor may comprise an array of pixels. The input image captured by the imaging sensor may comprise photosites or pixels acquired at slightly different moments in time. Contents of pixels in the pixel array may be captured on a row-by-row basis so that contents of an individual row of pixels may be captured after a time delay relative to the acquisition of contents of another row of pixels within the array. Although the rolling shutter configuration described in this application involves row-by-row capture, this is not intended to be limiting, and the concepts described herein are equally applicable to any image sensor that captures or “reads out” photosites or pixels on a subset-by-subset basis regardless of whether or not the subsets are rows.
Pixels of the pixel array may be characterized by pixel contents. Pixel contents may include pixel position information specifying the individual pixel positions within the pixel array. Pixel position within the array may be expressed in terms of x and y coordinates, and/or other coordinate schemes. Contents of pixels in the pixel array may include pixel values within the pixel array. Pixel value may include a numeric representation associated with specific colors in a color space (e.g., RGB, CMYK, and/or other color spaces). For example, a pixel having a value of (1, 0, 0) in the RGB color space may be associated with a pure red color. Other types of pixel values may control opacity and/or other information for the pixel.
In some implementations, the image capture device may include a lens. The lens may be configured to provide light waves to the imaging sensor. The lens may be characterized by a lens projection. In some implementations, the input image may be configured based on the lens projection and the input image may be configured in an equirectangular projection and/or other lens projection. In some implementations, the lens projection may be characterized by a rectilinear transformation and/or other lens projection.
The imaging sensor may typically capture an image over a time window configured between approximately 4 milliseconds and 1000 milliseconds, depending on an exposure and/or an image frame rate. During image capture, the imaging sensor performs row-by-row capture of pixel values wherein a given row of pixels of the image may be sampled at a given time. The row-to-row delay may result in a “rolling shutter” effect, where objects in the resultant image (i.e., the input image) may be skewed (tilted to the left or right, depending on the direction of image capture device or subject movement). For example, when tracking a car moving at high speed, the car may not appear distorted but the background may appear tilted.
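The row-by-row skew described above can be sketched numerically. The following Python sketch estimates the horizontal displacement of each row for a camera panning during readout; the pan rate, focal length, and transfer time are illustrative assumptions, not values from this disclosure.

```python
# Sketch: per-row horizontal displacement under a horizontal pan,
# illustrating why a rolling shutter tilts vertical edges.

def row_shift_px(row_index, transfer_time_s, pan_rate_rad_s, focal_px):
    """Approximate horizontal pixel shift of a row captured
    row_index * transfer_time_s after the first row, for a camera
    panning at pan_rate_rad_s (small-angle approximation)."""
    dt = row_index * transfer_time_s
    return focal_px * pan_rate_rad_s * dt

# A vertical edge imaged over 1080 rows with a 10 us row transfer time,
# while the camera pans at 1 rad/s with an assumed 800 px focal length:
shifts = [row_shift_px(r, 10e-6, 1.0, 800.0) for r in (0, 540, 1079)]
# The top row is unshifted; the bottom row lands several pixels to the
# side, so the edge appears tilted in the captured image.
```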
In some implementations, the approaches described for imaging sensors characterized by a row-by-row image acquisition process may be applicable to other sensor technologies and/or to other image configurations. By way of a non-limiting illustration, an imaging sensor may comprise two or more imaging sensor components wherein the first imaging sensor component may sense incoming waves (e.g., light, radio frequency waves, pressure waves) at a first time instance, while the second image sensor component may sense the incoming waves at a second time instance, e.g., subsequent to the first time instance. For example, an imaging sensor may comprise two charge-coupled device (CCD) imaging sensor components (e.g., one covering one (e.g., left) portion of a view field and one covering another (e.g., right) portion of the view) configured to acquire pixels at two time instances (e.g., first pixels of one CCD imaging sensor component and subsequently pixels of the other CCD imaging sensor component).
In some implementations, an imaging sensor may be configured to acquire (scan) a portion of an image at a given time. The image portion may include multiple pixels arranged in a variety of configurations, e.g., multiple rows, a partial row, a polygon, frame-like, and/or other shapes.
In some implementations, the imaging sensor may be configured to capture images using an exposure duration parameter and/or other parameters. The exposure duration parameter may be configured in accordance with a variety of parameters, e.g., image acquisition rate (frame rate), amount of light (ambient and/or flash), sensor sensitivity setting (e.g., ISO setting), sensor dimension, and/or other parameters. In some implementations, exposure time may be configured within the range between 1/500 of a second and 120 seconds. In some implementations, the row-to-row acquisition time (transfer time) parameter may be configured based on sensor pixel resolution, frame rate, and/or other imaging sensor parameters; and may be selected between 2 microseconds (us) and 10,000 us.
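For a sense of scale, the total readout span of a frame follows directly from the row count and the row-to-row transfer time quoted above; the 1080-row sensor below is an illustrative assumption.

```python
# Sketch: total rolling-shutter readout span for one frame.

def readout_span_s(rows, transfer_time_us):
    """Time between the start of the first row and the start of the
    last row, given a per-row transfer time in microseconds."""
    return rows * transfer_time_us * 1e-6

# A hypothetical 1080-row sensor at 10 us/row reads out over ~10.8 ms,
# during which the image capture device may move appreciably.
span = readout_span_s(1080, 10.0)
```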
Acquisition time component 108 may be configured to obtain acquisition time specifying time of capture of one or more sets of pixels within the pixel array of the input image. Pixel acquisition time may be determined using an imaging sensor internal clock and/or imaging sensor configurations. Imaging sensor configurations may determine a maximum frame rate and/or other imaging sensor parameters. Pixel acquisition time may be determined based on image exposure duration and position of the pixel in the image array (pixel row index). By way of a non-limiting illustration, a first acquisition time specifying a time of capture of a first set of pixels may be obtained within a pixel array of the input image. Acquisition time component 108 may be configured to obtain a second acquisition time specifying a time of capture of a second set of pixels within the pixel array of the input image.
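The per-row acquisition time described above can be sketched as a simple function of the row index; the frame start time, exposure duration, and transfer time used below are hypothetical values for illustration.

```python
# Sketch: associating an acquisition time with each pixel row from the
# exposure duration and the row's position (row index), as described.

def row_acquisition_time(frame_start_s, exposure_s, transfer_time_s, row_index):
    """Time at which a given row finishes exposing: the frame start,
    plus the row's readout offset (row_index * transfer_time), plus
    the exposure duration shared by all rows."""
    return frame_start_s + row_index * transfer_time_s + exposure_s

# With a 1/500 s exposure and 10 us transfer time, rows 0 and 999 are
# acquired roughly 10 ms apart within the same frame:
t_row0 = row_acquisition_time(0.0, 1 / 500, 10e-6, 0)
t_row999 = row_acquisition_time(0.0, 1 / 500, 10e-6, 999)
```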
Acquisition time component 108 may be configured to associate acquisition time specifying time of capture of one or more sets of pixels within the pixel array with row locations corresponding to pixels in the input image. By way of a non-limiting illustration, a first acquisition time may be associated with a first row of the input image. Acquisition time component 108 may be configured to associate a second acquisition time with a second row of the input image.
For example, and as illustrated in
As illustrated in
Referring again to
Orientation component 110 may be configured to obtain orientation information based on an analysis of the first pixel row of the input image and the second pixel row of the input image. In some implementations, orientation component 110 may be configured to be coupled to an orientation sensor. The orientation sensor may comprise one or more of a gyroscope, accelerometer, and/or other orientation sensors.
The orientation sensor (IMU) may be configured to provide orientation information. The orientation sensor may comprise one or more of a gyroscope, accelerometer, and/or other orientation sensors. The motion of the image capture device may cause changes in the image capture device orientation when different portions (e.g., different rows) of the input image are captured.
Images may reflect image capture device movements during image acquisition, which may result in image deformities caused by changes in orientation of the imaging sensor at the pixel acquisition time instance. For example, and as illustrated in
Image deformities may be corrected by determining the changes in orientation of the sensor at the pixel acquisition time instance. Obtaining orientation of the imaging sensor at the pixel acquisition time instance may be used in correcting the input image deformities associated with the motion of the image capture device. It will be recognized by those skilled in the art that various other image characteristics configured for individual input images, including but not limited to lens aperture, exposure value, depth of focus, color content, image sharpness, saturation, white balance, field of view, resolution, image size, lens projection (e.g., equirectangular, rectilinear), and/or other parameters may be used when correcting image deformities.
Image capture device imaging sensor orientation R(y), at the time of capturing a given row of the image (row y), may depend on the image capture device's rotation and transfer time (the time between the capture of subsequent rows in the image capture device). In some implementations, the transfer time may be determined during image capture device manufacturing and/or testing. In one or more implementations, the transfer time may be estimated by minimizing the projection error of overlapping parts of two or more source images. The projection error may be determined based on a measure of an average distance between matched control points using a key point detector methodology, such as a SIFT-based key point detector methodology and/or other methodology.
In some implementations, wherein the orientation information may be available at time intervals greater than the row acquisition and transfer time interval, an interpolation process may be applied to the orientation information in order to obtain orientation corresponding to the row acquisition time instant. The interpolation process may include selection of the closest-in-time value, linear interpolation, quadratic interpolation, and/or other methods.
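The interpolation step above can be sketched as follows. For simplicity the orientation samples are scalars (e.g., a yaw angle); a full implementation would interpolate rotations (e.g., quaternion slerp), but the time-lookup logic is the same. The 1 kHz sample rate is an illustrative assumption.

```python
# Sketch: interpolating discrete orientation samples to a row
# acquisition time that falls between two samples, supporting both
# closest-in-time selection and linear interpolation.
import bisect

def interp_orientation(sample_times, sample_values, t, mode="linear"):
    """Return the orientation at time t from discrete samples."""
    i = bisect.bisect_left(sample_times, t)
    if i == 0:
        return sample_values[0]          # before the first sample
    if i >= len(sample_times):
        return sample_values[-1]         # after the last sample
    t0, t1 = sample_times[i - 1], sample_times[i]
    v0, v1 = sample_values[i - 1], sample_values[i]
    if mode == "nearest":
        return v0 if (t - t0) <= (t1 - t) else v1
    w = (t - t0) / (t1 - t0)             # linear interpolation weight
    return v0 + w * (v1 - v0)

# Orientation sampled at 1 kHz; a row acquired at t = 1.25 ms falls
# between the 1 ms and 2 ms samples and is interpolated between them.
angle = interp_orientation([0.0, 0.001, 0.002], [0.0, 0.01, 0.02], 0.00125)
```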
Output pixel array component 112 may be configured to determine an output pixel array based on the input image. The output pixel array may represent a pixel array of the input image obtained by input image component 106 that has been transformed to account for changes in the imaging sensor orientation at different acquisition times, as described in U.S. patent application Ser. No. 15/387,468, which is incorporated by reference in its entirety.
By way of non-limiting illustration, a first output cell may be determined based on a first orientation information specifying a first imaging sensor orientation obtained at a first acquisition time of first set of pixels of the input image. Output pixel array component 112 may be configured to determine a second output cell based on differences between the first output cell (determined based on the first orientation information specifying the first imaging sensor orientation obtained at the first acquisition time of first set of pixels of the input image) and a second orientation information specifying a second imaging sensor orientation obtained at a second acquisition time of a second pixel of the input image.
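The dependence of an output cell on the orientation difference between two rows can be sketched with plain rotation matrices. The yaw-only rotations, pinhole-style ray, and angle values below are illustrative assumptions; a full implementation would use the calibrated lens projection and all three rotation axes.

```python
# Sketch: rotating the ray through an output cell by the relative
# rotation between the orientations at two rows' acquisition times.
import math

def yaw_matrix(theta):
    """3x3 rotation about the z (yaw) axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transpose(m):
    return [[m[k][i] for k in range(3)] for i in range(3)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_rotation(r_first, r_second):
    """Rotation taking the first-row frame to the second-row frame:
    R_rel = R_second * R_first^T."""
    rt = transpose(r_first)
    return [[sum(r_second[i][k] * rt[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

r1 = yaw_matrix(0.00)       # orientation at the first acquisition time
r2 = yaw_matrix(0.02)       # orientation at the second acquisition time
ray = [0.1, 0.0, 1.0]       # unit-focal ray through an output cell
corrected = apply(relative_rotation(r1, r2), ray)
# The ray is rotated by the 0.02 rad orientation change accumulated
# between the two rows' acquisition times.
```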
Iteration component 114 may be configured to determine an output image based on the output pixel array and the input image. The output image may represent a corrected version of the input image obtained by input image component 106, produced by determining a location of a pixel within the input image corresponding to a location of a pixel within the output pixel array obtained by output pixel array component 112 via one or more fixed point iterations.
Iteration component 114 may be configured to determine pixel locations on a sensor line y within the input image using imaging sensor orientations R(y) at a time the imaging sensor has acquired a row of pixels in the input image corresponding to the output cell within the output pixel array. Iteration component 114 may be configured to determine pixel locations on an imaging sensor line y within the input image by performing one or more fixed point iterations.
In some implementations, a pixel location within the input image may be determined using fixed point iterations expressed as yn+1=f(yn), n=1, 2, 3, . . . , N. The number of iterations N may be selected between 3 and 11, in some implementations. In one or more implementations, the number of iterations may be determined based on a difference between sensor line yn of a given iteration and sensor line yn−1 of a preceding iteration. Iterations may continue until no significant difference between successive values of yn is found.
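The stopping rule above (a fixed cap N together with a convergence check on successive sensor lines) can be sketched directly. The mapping f used in the usage example is a made-up contraction standing in for the re-projection described in this disclosure; the half-row tolerance is likewise an illustrative assumption.

```python
# Sketch: the fixed point iteration y_{n+1} = f(y_n), stopped either
# after N iterations or once successive sensor lines stop changing
# significantly.

def fixed_point(f, y0, max_iters=11, tol=0.5):
    """Iterate y = f(y) until |y_{n+1} - y_n| < tol (e.g., half a
    pixel row) or max_iters is reached; return the final line."""
    y = y0
    for _ in range(max_iters):
        y_next = f(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

# Hypothetical mapping whose fixed point is sensor line 400; starting
# from line 100, the iteration settles to within half a row of 400.
line = fixed_point(lambda y: 0.5 * y + 200.0, 100.0)
```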
In some implementations, during an initial iteration n=1, imaging sensor orientation R(y) may be used by iteration component 114. At a subsequent iteration n>1, a pixel may be selected in the output pixel array. A corresponding row yn+1 of the input image may be determined using the fixed point iteration process.
For example, and as illustrated by
At iteration 405, it may be determined that sensor line 413 may be acquired at acquisition time 425 within input image 421. Imaging sensor orientation at acquisition time 425 may be interpolated as 431 and output pixel 403 may correspond to pixel 409 within input image 421 with orientation 431. Pixel 409 may be located on sensor line 423 within input image 421.
At iteration 407, it may be determined that sensor line 423 may be acquired at acquisition time 427 within input image 421. Imaging sensor orientation acquired at acquisition time 427 may be interpolated as 435 and output pixel 403 may correspond to pixel 411 within input image 421 with orientation 435. Pixel 411 may be located on sensor line 433 within input image 421. Thus, by performing fixed point iteration process 400, pixel 411 may be determined as corresponding to output pixel 403 within output pixel array 401.
Referring again to
For example, and as illustrated by
Referring again to
A given client computing platform 104 may include one or more processors configured to execute computer program components. The computer program components may be configured to enable a producer and/or user associated with the given client computing platform 104 to interface with system 100 and/or external resources 120, and/or provide other functionality attributed herein to client computing platform(s) 104. By way of non-limiting example, the given client computing platform 104 may include one or more of a desktop computer, a laptop computer, a handheld computer, a NetBook, a Smartphone, a gaming console, and/or other computing platforms.
External resources 120 may include sources of information, hosts and/or providers of virtual environments outside of system 100, external entities participating with system 100, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources 120 may be provided by resources included in system 100.
Image capture device 126 may include electronic storage 122, one or more processors 124, and/or other components. Image capture device 126 may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of image capture device 126
Electronic storage 122 may include electronic storage media that electronically stores information. The electronic storage media of electronic storage 122 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with image capture device 126 and/or removable storage that is connectable to image capture device 126 via, for example, a port (e.g., a USB port, a Firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage 122 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
Electronic storage 122 may be a separate component within image capture device 126, or electronic storage 122 may be provided integrally with one or more other components within image capture device 126 (e.g., processor 124).
Although electronic storage 122 is shown in
Processor(s) 124 may be configured to provide information processing capabilities in image capture device 126. As such, processor(s) 124 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s) 124 is shown in
It should be appreciated that although components 106, 108, 110, 112 and/or 114 are illustrated in
In some implementations, method 600 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
The one or more processing devices may include one or more devices executing some or all of the operations of method 600 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 600.
At an operation 602, an input image array of an input image captured by an imaging sensor of an image capture device may be obtained. Operation 602 may be performed by one or more physical processors executing an input image component that is the same as or similar to input image component 106, in accordance with one or more implementations.
At an operation 604, a first acquisition time specifying capture times of a first pixel within the pixel array may be obtained and/or a second acquisition time specifying capture times of a second pixel within the pixel array may be obtained. Operation 604 may be performed by one or more physical processors executing an acquisition time component that is the same as or similar to acquisition time component 108, in accordance with one or more implementations.
At an operation 606, a first orientation information specifying imaging sensor orientation at the first acquisition time of the first pixel may be obtained and/or a second orientation information specifying imaging sensor orientation at the second acquisition time of the second pixel may be obtained. Operation 606 may be performed by one or more physical processors executing an orientation component that is the same as or similar to orientation component 110, in accordance with one or more implementations.
At an operation 608, an output pixel array of output cells based on the input pixel array may be determined. The cells of the output pixel array may be transformed based on differences between imaging sensor orientations at the different acquisition times within the input image. Operation 608 may be performed by one or more physical processors executing an output pixel array component that is the same as or similar to output pixel array component 112, in accordance with one or more implementations.
At an operation 610, one or more fixed point iterations to identify one or more input pixels within the input pixel array of the input image that correspond to one of the output pixels may be performed. At an operation 612, a value of the output pixel based on a value of a corresponding pixel within the input image may be determined. Operations 610 and 612 may be performed by one or more physical processors executing an iteration component that is the same as or similar to iteration component 114, in accordance with one or more implementations.
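Operations 602 through 612 can be sketched end-to-end in a deliberately simplified 1-D model: instead of full 3-D orientations, motion is reduced to a per-line vertical displacement, and pixel values are copied nearest-neighbor. Every helper and number below is a hypothetical stand-in for the components described above.

```python
# Sketch: for each output line, fixed-point iterate to find the input
# line it came from (operation 610), then copy that row's values
# (operation 612), using a toy per-line displacement model.

def correct_rows(input_rows, shift_for_line, max_iters=5):
    """input_rows: list of pixel rows. shift_for_line(y): vertical
    displacement (in rows) induced by motion while line y was being
    acquired. Returns the corrected image, row by row."""
    height = len(input_rows)
    output = []
    for y_out in range(height):
        y = float(y_out)                   # initial guess: same line
        for _ in range(max_iters):
            y = y_out + shift_for_line(y)  # fixed point update
        src = min(max(int(round(y)), 0), height - 1)
        output.append(list(input_rows[src]))
    return output

rows = [[r] * 4 for r in range(8)]             # toy 8-row image
corrected = correct_rows(rows, lambda y: 0.0)  # no motion: identity
```

With zero displacement the correction is the identity, a useful sanity check before introducing a motion model.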
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.