The subject matter disclosed herein relates to generating an image of a scene. In particular, the subject matter disclosed herein relates to methods, systems, and computer-readable storage media for generating three-dimensional images of a scene.
Stereoscopic, or three-dimensional, imagery is based on the principle of human vision. Two separate detectors detect the same object or objects in a scene from slightly different positions and/or angles and project them onto two planes. The resulting images are transferred to a processor which combines them and gives the perception of the third dimension, i.e. depth, to a scene.
Many techniques of viewing stereoscopic images have been developed and include the use of colored or polarizing filters to separate the two images, temporal selection by successive transmission of images using a shutter arrangement, or physical separation of the images in the viewer and projecting them separately to each eye. In addition, display devices have been developed recently that are well-suited for displaying stereoscopic images. For example, such display devices include digital still cameras, personal computers, digital picture frames, set-top boxes, high-definition televisions (HDTVs), and the like.
The use of digital image capture devices, such as digital still cameras, digital camcorders (or video cameras), and phones with built-in cameras, for use in capturing digital images has become widespread and popular. Because images captured using these devices are in a digital format, the images can be easily distributed and edited. For example, the digital images can be easily distributed over networks, such as the Internet. In addition, the digital images can be edited by use of suitable software on the image capture device or a personal computer.
Digital images captured using conventional image capture devices are two-dimensional. It is desirable to provide methods and systems for using conventional devices for generating three-dimensional images. In addition, it is desirable to provide methods and systems for aiding users of image capture devices to select appropriate image capture positions for capturing two-dimensional images for use in generating three-dimensional images. Further, it is desirable to provide methods and systems for altering the depth perceived in three-dimensional images.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description of Illustrative Embodiments. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Disclosed herein are methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene. According to an aspect, a method includes using at least one processor and at least one image capture device for capturing a real-time image and a first still image of a scene. Further, the method includes displaying the real-time image of the scene on a display. The method also can include determining one of an image sensor property, optical property, focal property, and viewing property of the captured images. The method also includes calculating one of camera positional offset and pixel offset indicia in a real-time display of the scene to indicate a target camera positional offset with respect to the first still image based on the captured images and potentially one of the image sensor property, optical property, focal property, and viewing property of the captured images. Further, the method includes determining that the at least one capture device is in a position of the target camera positional offset. The method also includes capturing a second still image. Further, the method includes correcting the captured first and second still images to compensate for at least one of camera vertical shift, vertical tilt, horizontal tilt, and rotation. The method also includes generating the three-dimensional image based on the corrected first and second still images.
According to another aspect, a method for generating a three-dimensional image includes using at least one processor for receiving, from an image capture device, a plurality of images of a scene captured from different positions. The method also includes determining attributes of the images. Further, the method includes generating, based on the attributes, a pair of images from the plurality of images for use in generating a three-dimensional image. The method also includes correcting the pair of images to compensate for one of camera vertical shift, vertical tilt, horizontal tilt, and rotation. Further, the method includes generating a three-dimensional image based on the corrected pair of images.
The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purposes of illustration, there is shown in the drawings exemplary embodiments; however, the presently disclosed subject matter is not limited to the specific methods and instrumentalities disclosed. In the drawings:
The presently disclosed subject matter is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or elements similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the term “step” may be used herein to connote different aspects of methods employed, the term should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described.
Embodiments of the presently disclosed subject matter are based on technology that allows a user to capture a plurality of different images of the same object within a scene and to generate one or more stereoscopic images using the different images. Particularly, methods in accordance with the present subject matter provide assistance to camera users in capturing pictures that can be subsequently converted into high-quality three-dimensional images. The functions disclosed herein can be implemented in hardware and/or software that can be executed within, for example, but not limited to, a digital still camera, a video camera (or camcorder), a personal computer, a digital picture frame, a set-top box, an HDTV, a phone, or the like. A mechanism to automate the image capture procedure is also described herein.
Methods, systems, and computer program products for selecting an image capture position to generate a three-dimensional image in accordance with embodiments of the present subject matter are disclosed herein. According to one or more embodiments of the present subject matter, a method includes determining a plurality of first guides associated with a first still image of a scene. The method can also include displaying a real-time image of the scene on a display. Further, the method can include determining a plurality of second guides associated with the real-time image. The method can also include displaying the first and second guides on the display for guiding selection of a position of an image capture device to automatically or manually capture a second still image of the scene, as well as any images in between in case the image capture device is set in a continuous image capturing mode, for pairing any of the captured images as a stereoscopic pair of a three-dimensional image. Such three-dimensional images can be viewed or displayed on a suitable stereoscopic display.
The functions and methods described herein can be implemented on a device capable of capturing still images, displaying three-dimensional images, and executing computer executable instructions on a processor. The device may be, for example, a digital still camera, a video camera (or camcorder), a personal computer, a digital picture frame, a set-top box, an HDTV, a phone, or the like. The functions of the device may include methods for rectifying and registering at least two images, matching the color and edges of the images, identifying moving objects, removing or adding moving objects from or to the images to equalize them, altering the perceived depth of objects, and any final display-specific transformation to generate a single, high-quality three-dimensional image. The techniques described herein may be applied to still-captured images and video images, which can be thought of as a series of images; hence, because a description of still-image processing covers both cases, the majority of the description herein addresses still-captured image processing.
Methods, systems, and computer program products for generating one or more three-dimensional images of a scene are disclosed herein. The three-dimensional images can be viewed or displayed on a stereoscopic display. The three-dimensional images may also be viewed or displayed on any other display capable of presenting three-dimensional images to a person using other suitable equipment, such as, but not limited to, three-dimensional glasses. In addition, the functions and methods described herein may be implemented on a device capable of capturing still images, displaying three-dimensional images, and executing computer executable instructions on a processor. The device may be, for example, a digital still camera, a video camera (or camcorder), a personal computer, a digital picture frame, a set-top box, an HDTV, a phone, or the like. Such devices may be capable of presenting three-dimensional images to a person without additional equipment, or if used in combination with other suitable equipment such as three-dimensional glasses. The functions of the device may include methods for rectifying and registering at least two images, matching the color and edges of the images, identifying moving objects, removing or adding moving objects from or to the images to equalize them, altering a perceived depth of objects, and any final display-specific transformation to generate a single, high-quality three-dimensional image. The techniques described herein may be applied to still-captured images and video images, which can be thought of as a series of images; hence, because a description of still-image processing covers both cases, the majority of the description herein addresses still-captured image processing.
In accordance with embodiments, systems and methods disclosed herein can generate and/or alter a depth map for an image using a digital still camera or other suitable device. Using the depth map for the image, a stereoscopic image pair and its associated depth map may be rendered. These processes may be implemented by a device such as a digital camera or any other suitable image processing device.
It should be noted that any of the processes and steps described herein may be implemented in an automated fashion. For example, any of the methods and techniques described herein may be automatically implemented without user input after the capture of a plurality of images.
Referring to
The memory 104 and the CPU 106 may be operable together to implement an image generator function 114 for generating three-dimensional images in accordance with embodiments of the presently disclosed subject matter. The image generator function 114 may generate a three-dimensional image of a scene using two or more images of the scene captured by the device 100.
The method of
The method of
The generated two or more images may also be suitably processed 206. For example, the images may be corrected and adjusted for display as described herein.
The method of
Although the above examples are described for use with a device capable of capturing images, embodiments of the present subject matter described herein are not so limited. Particularly, the methods described herein for generating a three-dimensional image of a scene may for example be implemented in any suitable system including a memory and computer processor. The memory may have stored therein computer-executable instructions. The computer processor may execute the computer-executable instructions. The memory and computer processor may be configured for implementing methods in accordance with embodiments of the subject matter described herein.
Images suitable for use as a three-dimensional image may be captured by a user using any suitable technique. For example,
In another example,
The distance between positions at which images are captured (the stereo baseline) for generating a three-dimensional image can affect the quality of the three-dimensional image. The optimal stereo baseline between the camera positions can vary anywhere between 3 centimeters (cm) and several feet, dependent upon a variety of factors, including the distance of the closest objects in frame, the lens focal length or other optics properties of the camera, the camera crop factor (dependent on sensor size), the size and resolution of the display on which the images will be viewed, and the distance from the display at which viewers will view the images. A general recommendation is that the stereo baseline should not exceed the distance defined by the following equation:
where B is the stereo baseline separation in inches, D is the distance in feet to the nearest object in frame, F is the focal length of the lens in millimeters (mm), and C is the camera crop factor relative to a full-frame (36×24 mm) digital sensor (which approximates the capture of a 35 mm analog camera). In the examples provided herein, it is assumed that at least two images have been captured, at least two of which can be interpreted as a stereoscopic pair.
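The exact baseline equation is not reproduced above, but the recommendation can be approximated in code. The sketch below uses the widely cited "1/30 rule" scaled by effective focal length and crop factor as a stand-in heuristic; the divisor and the 50 mm reference focal length are illustrative assumptions, not the equation from the text.

```python
def max_stereo_baseline_inches(d_feet, f_mm, crop_factor,
                               rule_divisor=30.0, normal_focal_mm=50.0):
    """Rough upper bound on the stereo baseline B, in inches.

    d_feet:      distance to the nearest in-frame object, in feet.
    f_mm:        lens focal length in millimeters.
    crop_factor: sensor crop factor relative to a full-frame (36x24 mm) sensor.

    Uses the common "1/30 rule" (baseline <= 1/30 of the nearest-object
    distance), scaled down as the effective focal length (f_mm * crop_factor)
    grows beyond a 50 mm "normal" lens.  This is a stand-in heuristic, not
    the exact equation referenced in the text.
    """
    d_inches = d_feet * 12.0
    effective_focal = f_mm * crop_factor
    return (d_inches / rule_divisor) * (normal_focal_mm / effective_focal)

# Example: nearest object 10 ft away, 35 mm lens on a 1.5x crop sensor.
print(round(max_stereo_baseline_inches(10, 35, 1.5), 2), "inches")
```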
The identification of stereo pairs in 302 is bypassed in the cases where the user has manually selected the image pair for 3D image registration. This bypass can also be triggered if a 3D-enabled capture device is used that identifies the paired images prior to the registration process. Returning to
A preliminary, quick analysis may be utilized for determining whether images among the plurality of captured images are similar enough to warrant a more detailed analysis. This analysis may be performed by, for example, the image generator function 114 shown in
The method of
The method of
Image pair is not stereoscopic=ABS(AVm−AVm+1)>ThresholdAV
OR
For all k, ABS(SAVk,m−SAVk,m+1)>ThresholdSAV
OR
ABS(MAXm−MAXm+1)>ThresholdMAX
OR
ABS(MINm−MINm+1)>ThresholdMIN
ThresholdAV, ThresholdSAV, ThresholdMAX, and ThresholdMIN are threshold value levels for the average, segmented average, maximum and minimum, respectively. These equations can be applied to all or at least some of the colors.
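A minimal sketch of this quick pre-check, assuming grayscale (single-color) input and illustrative threshold and grid-size values:

```python
import numpy as np

def likely_stereo_pair(img_a, img_b, grid=8,
                       th_av=10.0, th_sav=20.0, th_max=30.0, th_min=30.0):
    """Quick pre-check on two grayscale images (uint8 numpy arrays).

    Rejects the pair if the overall average, the maximum, or the minimum
    luminance differ by more than the thresholds, or if every segmented
    (grid-cell) average differs by more than its threshold.  The 8x8 grid
    and threshold values are illustrative placeholders.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)

    if abs(a.mean() - b.mean()) > th_av:
        return False
    if abs(a.max() - b.max()) > th_max or abs(a.min() - b.min()) > th_min:
        return False

    h, w = a.shape[:2]
    ys = np.linspace(0, h, grid + 1).astype(int)
    xs = np.linspace(0, w, grid + 1).astype(int)
    cell_diffs = []
    for i in range(grid):
        for j in range(grid):
            cell_a = a[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            cell_b = b[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            cell_diffs.append(abs(cell_a.mean() - cell_b.mean()))
    # Per the text, the segmented-average test rejects the pair only if
    # *every* segment differs by more than the threshold ("for all k").
    if all(d > th_sav for d in cell_diffs):
        return False
    return True
```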
The method of
Referring again to
Using the results of the motion estimation process used for object similarity evaluation, vertical displacement can be assessed. Vertical motion vector components are indicative of vertical parallax between the images, which when large can indicate a poor image pair. Vertical parallax must be corrected via rectification and registration to allow for comfortable viewing, and this correction will reduce the size of the overlapping region of the image in proportion to the original amount of vertical parallax.
Using the motion vectors from the similarity of objects check, color data may be compared to search for large changes between images. Such large changes can represent a color difference between the images regardless of similar luminance.
The method of
Referring to
For each vertical edge in one image, determine the closest edge in the other image, subject to meeting criteria for length, slope, and curvature. For distance, use the distance between the primary points. If this distance is larger than c, it is deemed that no edge matches, and this edge contributes ε to the cost function. The end result of the optimization is the determination of δ, the optimal shift between the two images based on this vertical edge matching. In box 622, the same optimization process from box 620 is repeated; this time, however, it is for horizontal edge matching, and it utilizes the vertical δ already determined in box 620.
In an example for block 622, the following equation may be used:
Block 624 then uses the calculated horizontal and vertical δ's to match each edge with its closest edge that meets the length, slope and curvature criteria. In an example for block 624, the following equation may be used:
The output of this stage is the matrix C, which has 1 in location i,j if edge i and j are matching edges and otherwise 0. This matrix is then pruned in Box 626 so that no edge is matched with multiple other edges. In the event of multiple matches, the edge match with minimal distance is used. Finally, in Box 628, the edge matches are broken down into regions of the image. The set of matching edges within each region are then characterized by the mean shift, and this mean shift is then the characteristic shift of the region. By examining the direction of the shifts of each subregion, it is thus possible to determine which picture is left and which is right. It is also possible to determine whether the second captured picture was captured with a focal axis parallel to the first picture. If not, there is some amount of toe-in or toe-out which can be characterized by the directional shifts of the subregions.
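The matching, pruning, and region-characterization steps can be sketched as follows. The edge representation (a primary point plus length, slope, and curvature values), the compatibility tolerance, and the 3×3 region grid are illustrative assumptions:

```python
import numpy as np

def match_edges(edges_a, edges_b, max_dist, feat_tol=0.25):
    """Greedy one-to-one edge matching sketch.

    edges_a / edges_b: lists of dicts with keys 'pt' (x, y primary point),
    'length', 'slope', and 'curvature'.  max_dist and feat_tol are
    illustrative stand-ins for the length/slope/curvature criteria.
    Returns a binary match matrix C of shape (len(edges_a), len(edges_b)).
    """
    def compatible(ea, eb):
        for k in ('length', 'slope', 'curvature'):
            denom = max(abs(ea[k]), abs(eb[k]), 1e-6)
            if abs(ea[k] - eb[k]) / denom > feat_tol:
                return False
        return True

    candidates = []
    for i, ea in enumerate(edges_a):
        for j, eb in enumerate(edges_b):
            if not compatible(ea, eb):
                continue
            d = np.hypot(ea['pt'][0] - eb['pt'][0], ea['pt'][1] - eb['pt'][1])
            if d <= max_dist:
                candidates.append((d, i, j))

    # Prune so that no edge is matched with multiple other edges: accept
    # candidate matches in order of increasing distance.
    C = np.zeros((len(edges_a), len(edges_b)), dtype=np.uint8)
    used_a, used_b = set(), set()
    for d, i, j in sorted(candidates):
        if i not in used_a and j not in used_b:
            C[i, j] = 1
            used_a.add(i)
            used_b.add(j)
    return C

def region_mean_shifts(edges_a, edges_b, C, img_w, img_h, nx=3, ny=3):
    """Mean (dx, dy) shift of matched edges per image sub-region."""
    shifts = {}
    for i, j in zip(*np.nonzero(C)):
        ax, ay = edges_a[i]['pt']
        bx, by = edges_b[j]['pt']
        key = (min(int(ax * nx / img_w), nx - 1),
               min(int(ay * ny / img_h), ny - 1))
        shifts.setdefault(key, []).append((bx - ax, by - ay))
    return {k: tuple(np.mean(v, axis=0)) for k, v in shifts.items()}
```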
Referring to
A Hough transform can be applied 306 to identify lines in the two images of the potential stereoscopic pair. Lines that are non-horizontal, non-vertical, and hence indicate some perspective in the image can be compared between the two images to search for perspective changes between the two views that may indicate a perspective change or excessive toe-in during capture of the pair.
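A sketch of this check using OpenCV's standard Hough transform is shown below; the Canny thresholds, Hough accumulator threshold, angular tolerances, and the mean-angle comparison are illustrative stand-ins for the comparison described above:

```python
import cv2
import numpy as np

def oblique_line_angles(gray, canny_lo=50, canny_hi=150,
                        hough_thresh=120, axis_tol_deg=5.0):
    """Return angles (degrees) of detected lines that are neither
    near-horizontal nor near-vertical.  Parameter values are illustrative."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_thresh)
    angles = []
    if lines is not None:
        for rho, theta in lines[:, 0]:
            deg = np.degrees(theta) % 180.0
            # theta is the angle of the line's normal: 0 -> vertical line,
            # 90 -> horizontal line.
            if axis_tol_deg < deg < 90 - axis_tol_deg or \
               90 + axis_tol_deg < deg < 180 - axis_tol_deg:
                angles.append(deg)
    return angles

def perspective_change_suspected(gray_l, gray_r, mean_angle_tol_deg=3.0):
    """Flag a possible perspective change or toe-in when the mean oblique
    line angle differs noticeably between the two views."""
    a_l = oblique_line_angles(gray_l)
    a_r = oblique_line_angles(gray_r)
    if not a_l or not a_r:
        return False
    return abs(np.mean(a_l) - np.mean(a_r)) > mean_angle_tol_deg
```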
The aforementioned criteria may be applied to scaled versions of the original images for reducing computational requirements. The results of each measurement may be gathered, weighted, and combined to make a final decision regarding the probable quality of a given image pair as a stereoscopic image pair.
The method of
At step 804, color segmentation is performed on the objects. At step 806, the bounding box of 8×8 blocks for each object in each image may be identified. At step 810, images may be partitioned into N×N blocks. At step 812, blocks with high information content may be selected. At step 813, the method includes performing motion estimation on blocks in the left (L) image relative to the right (R) image (accumulating motion vectors for L/R determination). These steps may be considered Techniques 1, 2, and 3.
At step 814, edge detection may be performed on left/right images. Next, at step 816, vertical and horizontal lines in left/right images may be identified and may be classified by length, location, and slope. At step 818, a Hough transform may be performed on the left/right images. Next, at step 820, the method includes analyzing Hough line slope for left/right images and identifying non-vertical and non-horizontal lines.
Referring to
At step 822, the following calculations may be performed for all objects or blocks of interest and lines:
At step 824, a weighted average of the above measures may be performed to determine whether images are a pair or not. Next, at step 826, average motion vector direction may be used to determine left/right images.
Referring again to
For a stereo pair of left and right view images, the method of
For a stereo pair of left and right view images with a set of identified interest points, rectification 318 may be performed on the stereo pair of images. Using the interest point set for the left view image, motion estimation techniques (as described in stereo pair identification above) and edge matching techniques are applied to find the corresponding points in the right view image.
and
and the fundamental matrix equation
rightptsT*F*leftpts=0
is solved or approximated to determine the 3×3 fundamental matrix, F, and epipoles, e1 and e2. The camera epipoles are used with the interest point set to generate a pair of rectifying homographies. It can be assumed that the camera properties are consistent between the two captured images. The respective homographies are then applied to the right and left images, generating the rectified images. The overlapping rectangular region of the two rectified images is then identified, the images are cropped to this rectangle, and the images are resized to their original dimensions, generating the rectified image pair, right_r and left_r. The rectified image pair can be defined by the following equations:
right_r=cropped(F*right)
left_r=cropped(F*left)
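A sketch of the fundamental-matrix and rectifying-homography steps using OpenCV's uncalibrated rectification path is shown below. The crop to the overlapping rectangle and the resize back to the original dimensions, described above, are omitted for brevity:

```python
import cv2
import numpy as np

def rectify_pair(left, right, left_pts, right_pts):
    """Rectify an image pair from matched interest points.

    left/right: images of equal size.
    left_pts/right_pts: Nx2 arrays of corresponding points.
    Returns (left_r, right_r).  Cropping to the overlapping region and
    resizing back to the original dimensions are left out of this sketch.
    """
    left_pts = np.asarray(left_pts, dtype=np.float32)
    right_pts = np.asarray(right_pts, dtype=np.float32)

    # Estimate the 3x3 fundamental matrix F (robust to outliers).
    F, inlier_mask = cv2.findFundamentalMat(left_pts, right_pts, cv2.FM_RANSAC)
    if F is None:
        raise RuntimeError("fundamental matrix estimation failed")
    inliers = inlier_mask.ravel().astype(bool)

    # Derive a pair of rectifying homographies from F and the inlier points.
    h, w = left.shape[:2]
    ok, H_left, H_right = cv2.stereoRectifyUncalibrated(
        left_pts[inliers], right_pts[inliers], F, (w, h))
    if not ok:
        raise RuntimeError("rectification failed")

    left_r = cv2.warpPerspective(left, H_left, (w, h))
    right_r = cv2.warpPerspective(right, H_right, (w, h))
    return left_r, right_r
```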
For the stereo pair of “left_r” and “right_r” images, registration is next performed on the stereo pair. A set of interest points is required, and the interest point set selected for rectification (or a subset thereof) may be translated to positions relative to the output of the rectification process by applying the homography of the rectification step to the points. Optionally, a second set of interest points may be identified for the left_r image, and motion estimation and edge matching techniques may be applied to find the corresponding points in the right_r image. The interest point selection process for the registration operation is the same as that for rectification. Again, the N corresponding interest points are made into a 3×N set of point values as set forth in the following equations:
and
and the following matrix equation
left_rpts=Tr*right_rpts
is approximated for a 3×3 linear conformal transformation, Tr, which may incorporate both translation on the X and Y axes and rotation in the X/Y plane. The transform Tr is applied to the right_r image to generate the image “Right” as defined by the following equation:
Right′=Tr*right_r,
where right_r is organized as a 3×N set of points (xir, yir, 1) for i=1 to image rows*image cols.
Finally, the second set of interest points for the left_r image may be used to find correspondence in the Right′ image, the set of points as set forth in the following equations:
and
is identified and composed, and the equation
Right′pts=Tl*left_rpts
is approximated for a second linear conformal transformation, Tl. The transform Tl is applied to the left_r image to generate the image “Left”, as defined by the following equation:
Left′=Tl*left_r
“Right” and “Left” images represent a rectified, registered stereoscopic pair.
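The registration step can be sketched with a 4-degree-of-freedom (translation, rotation, uniform scale) fit, which corresponds to the linear conformal transform Tr described above. OpenCV's estimateAffinePartial2D is used here as a convenient stand-in solver:

```python
import cv2
import numpy as np

def register_pair(left_r, right_r, left_pts, right_pts):
    """Estimate a linear conformal transform (translation, rotation, and
    uniform scale) that maps right_r interest points onto left_r points and
    apply it, yielding the registered view.  This is a sketch standing in
    for the approximation of left_rpts = Tr * right_rpts described above.
    """
    src = np.asarray(right_pts, dtype=np.float32)
    dst = np.asarray(left_pts, dtype=np.float32)

    Tr, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if Tr is None:
        raise RuntimeError("could not estimate registration transform")

    h, w = left_r.shape[:2]
    right_registered = cv2.warpAffine(right_r, Tr, (w, h))
    return right_registered, Tr
```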
The method of
The method of
After steps 1010 and 1014 of
Returning now to
For a stereoscopic pair of registered “Left” and “Right” images, the screen plane of the stereoscopic image can be altered 336, or relocated, to account for disparities measured as greater than a viewer can resolve. This is performed by scaling the translational portion of transforms that created the registered image views by a percent offset and re-applying the transforms to the original images. For example, if the initial left image transform is as follows:
for scaling factor S, X/Y rotation angle θ, and translational offsets Tx and Ty, the adjustment transform becomes
where Xscale and Yscale are determined by the desired pixel adjustment relative to the initial transform adjustment, i.e.,
Only in rare occurrences will Yscale be other than zero, and only then as a corrective measure for any noted vertical parallax. Using the altered transform, a new registered image view is created, e.g. the following:
Left′=Tlalt*left_r
Such scaling effectively adds to or subtracts from the parallax for each pixel, moving the point of zero parallax forward or backward in the scene. The appropriate scaling is determined by the translational portion of the transform and the required adjustment.
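A minimal sketch of scaling the translational portion of the registration transform is shown below. The exact definition of Xscale and Yscale is not reproduced above, so this sketch simply applies the adjustments as fractional offsets to Tx and Ty (with Yscale defaulting to zero, per the text):

```python
import numpy as np

def adjust_screen_plane(T, x_scale, y_scale=0.0):
    """Scale the translational portion of a 3x3 registration transform.

    T is assumed to have the form
        [[S*cos(theta), -S*sin(theta), Tx],
         [S*sin(theta),  S*cos(theta), Ty],
         [0,             0,            1 ]]
    x_scale / y_scale are fractional adjustments applied to Tx / Ty
    (y_scale is normally zero except to correct residual vertical parallax).
    The exact relationship to Xscale/Yscale in the text is an assumption.
    """
    T_alt = np.array(T, dtype=np.float64)
    T_alt[0, 2] *= (1.0 + x_scale)
    T_alt[1, 2] *= (1.0 + y_scale)
    return T_alt

# Example: shift the horizontal translation by 5% to relocate the screen plane.
T = np.array([[1.0, 0.0, 12.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
T_alt = adjust_screen_plane(T, x_scale=0.05)
```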
At step 338 of
Since moving an object region in the image may result in a final image that has undefined pixel values, a pixel-fill process is required to ensure that all areas of the resultant image have defined pixel values after object movement. An exemplary procedure for this is described below. Other processes, both more or less complex, may be applied.
Rul=(xl, yu); the upper left coordinate
Rll=(xl, yl); the lower left coordinate
Rur=(xr, yu); the upper right coordinate
Rlr=(xr, yl); the lower right coordinate
For a large or complex object, multiple rectangular regions may need to be defined and moved, but the process executes identically for each region.
In an example of defining left/right bounds of a region M for left/right motion, the region M is the region to which the altered transform can be applied. This process first assesses the direction of movement to occur and defines one side of region M. If the intended movement is to the right, then the right bounding edge of region M is defined by the following coordinate pair in the appropriate left_r or right_r image (whichever is to be adjusted):
Mur=(xr+P, yu); upper right
Mlr=(xr+P, yl); lower right
If movement is to the left, the left bounding edge of region M is defined as:
Mul=(xl−P, yu); upper left
Mll=(xl−P, yl); lower left
P is an extra number of pixels for blending purposes. The scaled version of the registration transform matrix Talt is provided 1104. The inverse of the altered transform (assumed already calculated as above for movement of the screen plane for the whole image) may then be applied 1106 to the opposite edge of the region R to get the other edge of region M. For the sake of example, assume that the movement of R is intended to be to the right, and that the left image is to be altered (meaning Tlalt has been created for the intended movement). Since the right side of M is already known, the other side can now be determined as:
Mul=Tlalt−1*Rul+(P,0); upper left
Mll=Tlalt−1*Rll+(P,0); lower left
Again, P is an extra number of pixels for blending, and Tlalt−1 is the inverse transform of Tlalt. Note that P is added after the transform application, and only to the X coordinates. The region to be moved is now defined as the pixels within the rectangle defined by M.
The method also includes applying 1108 the inverse transform of Tlalt to the image to be transformed for blocks in the region M. For example, from this point, one of two operations can be used, depending on a measurement of the uniformity (texture) of the area defined by the coordinates Mul, Mll, Rul, and Rll (remembering again that the region would be using other coordinates for a movement to the left). Uniformity is measured by performing a histogram analysis on the RGB values for the pixels in this area. If the pixel variation is within a threshold, the area is deemed uniform, and the movement of the region is effected by applying the following equation: Left′=Tlalt*left_r, for left_r∈M. This is the process shown in the example method of
Left′=Tlalt*left_r, for the left_r region defined by Rul, Rll, Mur, and Mlr.
The method of
The method of
d=Rul(x)−Mul(x)
for the x-coordinates of Rul and Mul, and then proceeds to determine an interpolated gradient between the two pixel positions to fill in the missing values. For simplicity of implementation, the interpolation is always performed on a power of two, meaning that the interpolation will produce one of 1, 2, 4, 8, 16, etc. pixels as needed between the two defined pixels. Pixel regions that are not a power of two are mapped to the closest power of two, and either pixel repetition or truncation of the sequence is applied to fit. As an example, if Rul(x)=13 and Mul(x)=6, then d=7, and the following intermediate pixel gradient is calculated for a given row, j, in the region:
Since only 7 values are needed, p8 would go unused in this case, such that the following assignments would be made:
This process can repeat for each row in the empty region.
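A sketch of the row-by-row gradient fill is shown below; the elided gradient formula is approximated here by a linear interpolation generated over the closest power-of-two sample count and then truncated or repeated to fit, per the description above:

```python
import numpy as np

def fill_row_gap(row, left_x, right_x):
    """Fill row[left_x+1 : right_x] with an interpolated gradient between
    the defined pixels at left_x and right_x.

    Per the text, the gradient is generated over the closest power-of-two
    number of samples and then truncated (or pixel-repeated) to fit the
    actual gap width d = right_x - left_x - 1.  The linear interpolation is
    an assumption standing in for the elided gradient formula.
    """
    d = right_x - left_x - 1
    if d <= 0:
        return row
    n = 1 << int(round(np.log2(d)))            # closest power of two
    start = row[left_x].astype(np.float64)
    stop = row[right_x].astype(np.float64)
    # n interior samples strictly between the two defined pixels.
    grad = np.linspace(start, stop, n + 2)[1:-1]
    if n >= d:
        fill = grad[:d]                        # truncate the sequence
    else:
        reps = int(np.ceil(d / n))
        fill = np.repeat(grad, reps, axis=0)[:d]   # repeat pixels to fit
    row[left_x + 1:right_x] = fill.astype(row.dtype)
    return row
```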
A weighted averaging of the outer P “extra” pixels on each side of the rectangle with the pixel data currently in those positions is performed to blend the edges.
As an alternative to the procedure of applying movement and pixel blending to alter the parallax of an object, the disparity map calculated using the two views, “Left” and “Right′,” can be altered for the region M to reduce the disparity values in that region, and then applied to one of the “Left” or “Right” single image views to create a new view (e.g., “Left_disparity”). The result of this process is a new stereo pair (e.g., “Left” and “Left_disparity”) that recreates the depth of the original pair, but with lesser parallax for the objects within the region M. Once created in this manner, the “disparity” view becomes the new opposite image to the original, or for example, a created “Left_disparity” image becomes the new “Right” image.
Returning to
The method of
The method of identifying and compensating for moving objects consists of the following steps. For a given sequence of pictures captured between two positions, divide each picture into smaller areas and calculate motion vectors between all pictures in all areas. Calculate by a windowed moving average the global motion that results from the panning of the camera. Then subtract the area motion vector from the global motion to identify the relative motion vectors of each area in each picture. If the motion of each area is below a certain threshold, the picture is static and the first and last picture, or any other set with the desired binocular distance, can be used as left and right target pictures to form a valid stereoscopic pair that will be used for registration, rectification, and generation of a 3D picture. If the motion of any area is above an empirical threshold, then identify all other areas that have zero motion vectors and copy those areas from any of the leftmost pictures to the target left picture and any of the rightmost pictures to the target right picture.
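A minimal sketch of separating camera panning from object motion, assuming per-area motion vectors between consecutive pictures are already available from a block-based motion estimator; the moving-average window and the static-area threshold are illustrative:

```python
import numpy as np

def classify_area_motion(area_motion_vectors, window=3, static_thresh=2.0):
    """Separate camera pan from object motion for a capture sequence.

    area_motion_vectors: array of shape (num_frames - 1, rows, cols, 2)
    holding per-area (dx, dy) motion vectors between consecutive pictures.
    Returns (global_motion, relative_motion, static_mask), where static_mask
    marks areas whose residual motion stays below static_thresh pixels in
    every frame step.  Window size and threshold are illustrative.
    """
    mv = np.asarray(area_motion_vectors, dtype=np.float64)
    n = mv.shape[0]

    # Global (pan) motion per frame step: a windowed moving average of the
    # mean area motion.
    frame_mean = mv.reshape(n, -1, 2).mean(axis=1)              # (n, 2)
    kernel = np.ones(window) / window
    global_motion = np.stack(
        [np.convolve(frame_mean[:, c], kernel, mode='same') for c in (0, 1)],
        axis=1)                                                  # (n, 2)

    # Relative motion of each area = area motion minus global motion.
    relative = mv - global_motion[:, None, None, :]
    magnitude = np.linalg.norm(relative, axis=-1)                # (n, rows, cols)

    static_mask = (magnitude < static_thresh).all(axis=0)
    return global_motion, relative, static_mask
```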
For objects where motion is indicated and where the motion of an object is below the acceptable disparity threshold, identify the most suitable image to copy the object from, copy the object to the left and right target images, and adjust the disparities as shown in the attached figure. The more frames that are captured, the less estimation is needed to determine the rightmost pixel of the right view. Most of the occluded pixels can be extracted from the leftmost images. For an object that is moving in and out of the scene between the first and last picture, identify the object and completely remove it from the first picture if there is enough data in the captured sequence of images to fill in the missing pixels.
For objects where motion is indicated and where the motion is above the acceptable disparity, identify the most suitable picture from which to extract the target object and extrapolate the proper disparity information from the remaining captured pictures.
The actual object removal process involves identifying N×N blocks, with N empirically determined, to make up a bounding region for the region of “infinite” parallax, plus an additional P pixels (for blending purposes), determining the corresponding position of those blocks in the other images using the parallax values of the surrounding P pixels that have a similar gradient value (meaning that high gradient areas are extrapolated from similar edge areas and low gradient areas are extrapolated from similar surrounding flat areas), copying the blocks/pixels from the opposite locations to the intended new location, and performing a weighted averaging of the outer P “extra” pixels with the pixel data currently in those positions to blend the edges. If it is determined to remove an object, fill-in data is generated 346. Otherwise, the method proceeds to step 348.
The movement of the object 1304 is such that the disparity is unacceptable and should be corrected. In this example, the image obtained from position 1300 can be utilized for creating a three-dimensional image, and the image obtained from position 1302 can be altered for use together with the other image in creating the three-dimensional image. To correct, the object 1304 may be moved to the left (as indicated by direction arrow 1312 in
Another example of a process for adding/removing objects from a single image is illustrated in
Referring to
Referring to
As an alternative to the procedure of identifying bounding regions of 8×8 blocks around objects to be added or removed in a view, the disparity map calculated using multiple views, “Left”, “Right”, and/or the images in between, can be applied to one of the “Left” or “Right” single image views to create a new view (e.g., “Left_disparity”). The result of this process is a new stereo pair (e.g., “Left” and “Left_disparity”) that effectively recreates the depth of the original pair, but without object occlusions, movement, additions, or removals. Once created in this manner, the “disparity” view becomes the new opposite image to the original, or for example, a created “Left_disparity” image becomes the new “Right” image. Effectively, this procedure mimics segmented object removal and/or addition, but on a full image scale.
Returning to
For a finalized, color corrected, motion corrected stereoscopic image pair, the “Left” and “Right” images are ordered and rendered to a display as a stereoscopic image. The format is based on the display parameters. Rendering can require interlacing, anamorphic compression, pixel alternating, and the like.
For a finalized, color corrected, motion corrected stereoscopic image pair, the “Left” view may be compressed as the base image and the “Right” image may be compressed as the disparity difference from the “Left” using a standard video codec, differential JPEG, or the like.
The method of
When a video sequence is captured with lateral camera motion as described above, stereoscopic pairs can be found within the sequence of resulting images. Stereoscopic pairs are identified based on their distance from one another determined by motion analysis (e.g., motion estimation techniques). Each pair represents a three-dimensional picture or image, which can be viewed on a suitable stereoscopic display. If the camera does not have a stereoscopic display, the video sequence can be analyzed and processed on any suitable display device. If the video sequence is suitable for creating three-dimensional content (e.g., one or more three-dimensional images), it is likely that there are many potential stereoscopic pairs, as an image captured at a given position may form a pair with images captured at several other positions. The image pairs can be used to create three-dimensional still images or re-sequenced to create a three-dimensional video.
When creating three-dimensional still images, the user can select which images to use from the potential pairs, thereby adjusting both the perspective and parallax of the resulting images to achieve the desired orientation and depth.
Another method of creating a three-dimensional sequence includes creating stereoscopic pairs by grouping the first and last images in the sequence, followed by the second and next-to-last images, and so on until all images have been used. During playback this creates the effect of the camera remaining still while the depth of the scene decreases over time due to decreasing parallax. The three-dimensional images can also be sequenced in the opposite order so that the depth of the scene increases over time.
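A minimal sketch of this outside-in pairing:

```python
def outside_in_pairs(frames):
    """Pair the first and last frames, then the second and next-to-last,
    and so on, producing a sequence whose apparent depth decreases over
    time (reverse the list of pairs for increasing depth)."""
    pairs = []
    i, j = 0, len(frames) - 1
    while i < j:
        pairs.append((frames[i], frames[j]))   # (left, right) candidate pair
        i += 1
        j -= 1
    return pairs

# Example with frame indices 0..5: [(0, 5), (1, 4), (2, 3)]
print(outside_in_pairs(list(range(6))))
```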
The generation and presentation, such as display, of three-dimensional images of a scene in accordance with embodiments of the present subject matter may be implemented by a single device or combination of devices. In one or more embodiments of the present subject matter, images may be captured by a camera such as, but not limited to, a digital camera. The camera may be connected to a personal computer for communication of the captured images to the personal computer. The personal computer may then generate one or more three-dimensional images in accordance with embodiments of the present subject matter. After generation of the three-dimensional images, the personal computer may communicate the three-dimensional images to the camera for display on a suitable three-dimensional display. The camera may include a suitable three-dimensional display. Also, the camera may be in suitable electronic communication with a high-definition television for display of the three-dimensional images on the television. The communication of the three-dimensional images may be, for example, via an HDMI connection.
In one or more other embodiments of the present subject matter, three-dimensional images may be generated by a camera and displayed by a separate suitable display. For example, the camera may capture conventional two-dimensional images and then use the captured images to generate three-dimensional images. The camera may be in suitable electronic communication with a high-definition television for display of the three-dimensional images on the television. The communication of the three-dimensional images may be, for example, via an HDMI connection.
In accordance with embodiments of the presently disclosed subject matter, the memory 104 and the CPU 106 shown in
The method of
The method of
The method of
The method of
Although the above examples are described for use with a device capable of capturing images, embodiments described herein are not so limited. Particularly, the methods described herein for assisting a camera user to generate a three-dimensional image of a scene may, for example, be implemented in any suitable system including a memory and computer processor. The memory may have stored therein computer-executable instructions. The computer processor may execute the computer-executable instructions. The memory and computer processor may be configured for implementing methods in accordance with embodiments of the present disclosure.
In accordance with embodiments of the present disclosure, a user may create high-quality, three-dimensional content using a standard digital still camera, video camera (or cameras), other digital camera equipment or devices (e.g., a camera-equipped mobile phone), or the like. In order to generate a good three-dimensional picture or image, a plurality of images of the same object can be captured from varied positions. In an example, in order to generate three-dimensional images, a standard digital still or video camera (or cameras) can be used to capture a plurality of pictures with the following guidelines. The user uses the camera to capture an image, and then captures subsequent pictures after moving the camera left or right from its original location. These pictures may be captured as still images or as a video sequence.
where B is the stereo baseline separation in inches, D is the distance in feet to the nearest object in frame, F is the focal length of the lens in millimeters (mm), and C is the camera crop factor relative to a full-frame (36×24 mm) digital sensor (which approximates the capture of a 35 mm analog camera). In the examples provided herein, it is assumed that at least two images have been captured, at least two of which can be interpreted as a stereoscopic pair.
Embodiments of the present disclosure define a “stereoscopic mode,” which may be used in conjunction with a standard digital still camera, standard video camera, other digital camera, or the like to assist the camera user in performing the function of capturing images that ultimately yield high-quality, three-dimensional images.
The method of
In turn, the near field depth of field (Dn) for an image can be approximated for a given focus distance (d) using the following equation:
(for moderate to large d), and the far field DOF (Df) as
for d<H, where H is the hyperfocal distance. For values of d>=H, the far field DOF is infinite.
Since the focus distance, focal length, and aperture are recorded at the time of capture, and the circle of confusion value is known for a given camera sensor format, the closest focused object can be assumed to be at the distance Dn, while the furthest focused pixels are at Df.
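The near- and far-field equations themselves are not reproduced above; the sketch below uses the standard hyperfocal-distance approximations, which match the stated behavior (the far field becomes infinite when d >= H):

```python
def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm):
    """Near/far limits of acceptable sharpness (standard approximations).

    focal_mm:      lens focal length F.
    f_number:      aperture N.
    focus_dist_mm: focus distance d.
    coc_mm:        circle of confusion for the sensor format.

    Returns (Dn, Df) in millimeters; Df is infinite when d >= H, matching
    the behavior described in the text.  These are the standard
    hyperfocal-distance formulas, used as a stand-in for the exact
    equations referenced above.
    """
    H = (focal_mm ** 2) / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    d = focus_dist_mm
    Dn = H * d / (H + (d - focal_mm))
    if d >= H:
        return Dn, float('inf')
    Df = H * d / (H - (d - focal_mm))
    return Dn, Df

# Example: 35 mm lens at f/8, focused at 3 m, CoC of about 0.02 mm (APS-C).
print(depth_of_field(35.0, 8.0, 3000.0, 0.02))
```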
In addition to this depth calculation, edge and feature point extraction may be performed on the image to identify interest points for later use. To reduce the complexity of this evaluation, the image may be down-scaled to a reduced resolution before subsequent processing. An edge detection operation is performed on the resultant image, and a threshold operation is applied to identify the most highly defined edges at a given focus distance. Finally, edge crossing points are identified. This point set, IP, represents primary interest points at the focused depth(s) of the image.
The stereoscopic camera assist method then uses the depth values Dn and Df to determine the ideal distance to move right or left between the first and subsequent image captures; this distance is the position offset. It is assumed that the optimal screen plane is some percentage, P, behind the nearest sharp object in the depth of field, or at
Ds=(Dn*(1+P/100)),
where P is a defined percentage that may be camera and/or lens dependent. At the central point of this plane, an assumed point of eye convergence, there will be zero parallax for two registered stereoscopic images. Objects in front of and behind the screen plane will have increasing amounts of disparity as the distance from the screen increases (negative parallax for objects in front of the screen, positive parallax for objects behind the screen).
The value Ds gives the value of R. Hence, the binocular distance indicated to the user to move before the second/last capture is estimated as
or for default θ=2°, and
for B and Ds measured in inches (or centimeters, or any consistent unit).
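The binocular-distance equation is likewise not reproduced above. Under the convergence geometry described (zero parallax at distance Ds with a modeled convergence angle θ), a plausible reconstruction is B = 2·Ds·tan(θ/2), sketched below; treat the formula as an assumption rather than the text's exact equation:

```python
import math

def binocular_distance(ds, theta_deg=2.0):
    """Estimated camera offset B for a convergence distance Ds and a modeled
    convergence angle theta (default 2 degrees).  B = 2 * Ds * tan(theta/2)
    is a reconstruction based on the convergence geometry described in the
    text; units follow whatever unit Ds is given in."""
    return 2.0 * ds * math.tan(math.radians(theta_deg) / 2.0)

# Example: screen plane at 120 inches with the default 2-degree convergence.
print(round(binocular_distance(120.0), 2), "inches")
```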
The method of
The value S is calculated using the value Ds (converted to mm) and the angle of view (V) for the capture. The angle of view (V) is given by the equation
for the width of the image sensor (W) and the focal length (F). Knowing V and Ds, the width of the field of view (WoV) can be calculated as
WoV=2*Ds*tan(V/2)=Ds*W/F.
The width of view for the right eye capture is the same. Hence, if the right eye capture at the camera is to be offset by the binocular distance B, and the central point of convergence is modeled as B/2, the position of the central point of convergence in each of WoV1 and WoV2 (the width of view of images 1 and 2, respectively) can be calculated. Within WoV1, the central point of convergence will lie at a position
Conversely, within WoV2, the central point of convergence will lie at a position
and X2 is the similar coordinate for the right image to be captured, calculated as
where Pw is the image width in pixels. Finally, S is calculated as
Since W, F, and Pw are camera-specific quantities, the only specified quantity is the modeled convergence angle, θ, as noted typically 1-2 degrees. The value S may need to be scaled for use with a given display, due to the potentially different resolution of the display and the camera sensor.
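The elided expressions for X1, X2, and S can be reconstructed under the stated geometry (second capture offset by B, convergence point modeled at B/2), in which case S reduces to B·Pw/WoV. The sketch below follows that reconstruction and should be read as an assumption:

```python
import math

def pixel_shift(ds_mm, sensor_width_mm, focal_mm, image_width_px,
                theta_deg=2.0):
    """Approximate horizontal pixel shift S between the two captures.

    Reconstruction under the geometry described in the text: the width of
    view at the convergence distance is WoV = Ds * W / F, the camera offset
    is B = 2 * Ds * tan(theta/2), and the convergence point appears B/2 to
    the right of center in view 1 and B/2 to the left of center in view 2,
    so S = B * Pw / WoV.  The exact elided expressions for X1 and X2 may
    differ; treat this as an assumption.
    """
    wov = ds_mm * sensor_width_mm / focal_mm               # width of view (mm)
    b = 2.0 * ds_mm * math.tan(math.radians(theta_deg) / 2.0)
    x1 = (wov / 2.0 + b / 2.0) * image_width_px / wov      # pixel position, view 1
    x2 = (wov / 2.0 - b / 2.0) * image_width_px / wov      # pixel position, view 2
    return x1 - x2                                         # = b * Pw / wov

# Example: Ds = 3 m, 23.6 mm-wide sensor, 35 mm lens, 4000-pixel-wide image.
print(round(pixel_shift(3000.0, 23.6, 35.0, 4000), 1), "pixels")
```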
In the case where guides beyond displacement and vertical alignment are generated (assisting with perspective alignment, rotation prevention, and the prevention of camera toe-in),
In accordance with other embodiments of user alignment assistance, one or more windows 2718 may be displayed which contain different alignment guides 2720 to assist the user in moving the camera for capturing the second image. The windows 2718 may include live views of the scene and alignment guides 2720 that are calculated based on various objects 2722 in the image. A feature may also be available which allows the user to control the zoom factor of one or more windows 2724 in order to improve viewing of the enclosed objects 2726 and alignment guides 2728, thus facilitating camera alignment in accordance with embodiments of the presently disclosed subject matter.
Note that although the convergent point at a distance Ds should have zero parallax, the individual image captures do not capture the convergent center as the center of their image. To obtain the convergent view, registration of the image pair after capture must be performed.
Referring to
If the camera monitoring feature is activated, the device 100 may analyze the currently viewed image (step 2218). For example, in this mode, the device 100 continues to monitor the capture window as the user moves the camera to different positions to capture the second/last picture. The device 100 analyzes the image and determines if an ideal location has been reached and the camera is aligned (step 2220). If the ideal location has not been reached and the camera is not aligned, the device 100 may adjust directional feedback relative to its current camera position (step 2222). If the ideal location has been reached and the camera is aligned, the second image may be captured automatically when the calculated binocular distance is reached, as indicated by proper alignment of the region of interest with the current live view data and any assistance lines, such as those generated by a Hough transform (step 2224).
Although the camera may be moved manually, a mechanism may automate the movement process. For example,
At step 2906, the camera 2804 may use optics, focus, depth of field information, user parallax preference, and/or the like to determine position offset for the next image. For example, after the first image is captured, the camera 2804 may communicate feedback information about the movement needed for the second/last shot to the motor controller. The motor 2802 may then move the camera 2804 to a new location along the rails 2806 according to the specified distance (step 2908). When the calculated camera position is reached, the last image may be captured automatically with settings to provide the same exposure as the first image (step 2910). The camera 2804 may then be moved back to the home position (step 2912). Any of the captured images may be used to form stereoscopic pairs used to create three-dimensional images. All of the calculations required to determine the required camera movement distance are the same as those above for manual movement, although the process simplifies since the mount removes the possibility of an incorrect perspective change (due to camera toe-in) that would otherwise have to be analyzed.
The subject matter disclosed herein may be implemented by a digital still camera, a video camera, a mobile phone, a smart phone, a tablet, a notebook or laptop computer, a personal computer, a computer server, and the like. In order to provide additional context for various aspects of the disclosed subject matter,
Generally, however, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular data types. The operating environment 3000 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the subject matter disclosed herein. Other well-known computer systems, environments, and/or configurations that may be suitable for use with the presently disclosed subject matter include but are not limited to, personal computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include the above systems or devices, and the like.
With reference to
The system bus 3008 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, 11-bit bus, Industry Standard Architecture (ISA), Micro-Channel Architecture (MCA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), and Small Computer Systems Interface (SCSI).
The system memory 3006 includes volatile memory 3010 and nonvolatile memory 3012. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 3002, such as during start-up, is stored in nonvolatile memory 3012. By way of illustration, and not limitation, nonvolatile memory 3012 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory 3010 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
Computer 3002 also includes removable/nonremovable, volatile/nonvolatile computer storage media.
It is to be appreciated that
A user enters commands or information into the computer 3002 through input device(s) 3026. Input devices 3026 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 3004 through the system bus 3008 via interface port(s) 3028. Interface port(s) 3028 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 3030 use some of the same type of ports as input device(s) 3026. Thus, for example, a USB port may be used to provide input to computer 3002 and to output information from computer 3002 to an output device 3030. Output adapter 3032 is provided to illustrate that there are some output devices 3030 like monitors, speakers, and printers among other output devices 3030 that require special adapters. The output adapters 3032 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 3030 and the system bus 3008. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 3034.
Computer 3002 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 3034. The remote computer(s) 3034 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor-based appliance, a peer device, or other common network node and the like, and typically includes many or all of the elements described relative to computer 3002. For purposes of brevity, only a memory storage device 3036 is illustrated with remote computer(s) 3034. Remote computer(s) 3034 is logically connected to computer 3002 through a network interface 3038 and then physically connected via communication connection 3040. Network interface 3038 encompasses communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet/IEEE 802.3, Token Ring/IEEE 802.5, and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
Communication connection(s) 3040 refers to the hardware/software employed to connect the network interface 3038 to the bus 3008. While communication connection 3040 is shown for illustrative clarity inside computer 3002, it can also be external to computer 3002. The hardware/software necessary for connection to the network interface 3038 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
The various techniques described herein may be implemented with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the disclosed embodiments, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computer will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device and at least one output device. One or more programs are preferably implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language, and combined with hardware implementations.
The described methods and apparatus may also be embodied in the form of program code that is transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as an EPROM, a gate array, a programmable logic device (PLD), a client computer, a video recorder or the like, the machine becomes an apparatus for practicing the presently disclosed subject matter. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates to perform the processing of the presently disclosed subject matter.
While the embodiments have been described in connection with the preferred embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiment for performing the same function without deviating therefrom. Therefore, the disclosed embodiments should not be limited to any single embodiment, but rather should be construed in breadth and scope in accordance with the appended claims.
This application is a continuation application of U.S. utility patent application Ser. No. 16/129,273, filed Sep. 12, 2018, which is a continuation application of U.S. utility patent application Ser. No. 15/193,623 (now U.S. Pat. No. 10,080,012), filed Jun. 27, 2016, which is a continuation application of U.S. utility patent application Ser. No. 13/115,459 (now U.S. Pat. No. 9,380,292), filed May 25, 2011, which is a continuation-in-part application of U.S. utility patent application Ser. No. 12/842,084 (now U.S. Pat. No. 8,508,580), filed Jul. 23, 2010, which claims the benefit of U.S. provisional patent application Ser. No. 61/230,131, filed Jul. 31, 2009; the disclosures of which are incorporated herein by reference in their entireties; said U.S. utility patent application Ser. No. 13/115,459 (now U.S. Pat. No. 9,380,292), filed May 25, 2011, is a continuation-in-part application of U.S. utility patent application Ser. No. 12/842,171 (now U.S. Pat. No. 8,436,893), filed Jul. 23, 2010, which claims the benefit of U.S. provisional patent application Ser. No. 61/230,133, filed Jul. 31, 2009; the disclosures of which are incorporated herein by reference in their entireties.
7538876 | Hewitt et al. | May 2009 | B2 |
7551770 | Harman | Jun 2009 | B2 |
7557824 | Holliman | Jul 2009 | B2 |
7573475 | Sullivan et al. | Aug 2009 | B2 |
7573489 | Davidson et al. | Aug 2009 | B2 |
7580463 | Routhier et al. | Aug 2009 | B2 |
7605776 | Satoh et al. | Oct 2009 | B2 |
7616886 | Matsumura et al. | Nov 2009 | B2 |
7619656 | Ben-Ezra et al. | Nov 2009 | B2 |
7639838 | Nims | Dec 2009 | B2 |
7643062 | Silverstein et al. | Jan 2010 | B2 |
7680323 | Nichani | Mar 2010 | B1 |
7693221 | Routhier et al. | Apr 2010 | B2 |
7701506 | Silverbrook | Apr 2010 | B2 |
7705970 | Piestun et al. | Apr 2010 | B2 |
7711181 | Kee et al. | May 2010 | B2 |
7711201 | Wong et al. | May 2010 | B2 |
7711221 | Burgi et al. | May 2010 | B2 |
7768702 | Hirose et al. | Aug 2010 | B2 |
7817187 | Silsby et al. | Oct 2010 | B2 |
7844001 | Routhier et al. | Nov 2010 | B2 |
7857455 | Cowan et al. | Dec 2010 | B2 |
7873207 | Tsubaki | Jan 2011 | B2 |
7876948 | Wetzel et al. | Jan 2011 | B2 |
8274552 | Dahi et al. | Sep 2012 | B2 |
8436893 | McNamer | May 2013 | B2 |
8441520 | Dahi et al. | May 2013 | B2 |
8456515 | Li et al. | Jun 2013 | B2 |
8508580 | McNamer et al. | Aug 2013 | B2 |
8633967 | Kamins-Naske et al. | Jan 2014 | B2 |
8649660 | Bonarrigo et al. | Feb 2014 | B2 |
8810635 | McNamer et al. | Aug 2014 | B2 |
9380292 | McNamer et al. | Jun 2016 | B2 |
10200671 | Dahi et al. | Feb 2019 | B2 |
20020106120 | Brandenburg et al. | Aug 2002 | A1 |
20020158873 | Williamson | Oct 2002 | A1 |
20020190991 | Efran et al. | Dec 2002 | A1 |
20020191841 | Harman | Dec 2002 | A1 |
20030002870 | Baron | Jan 2003 | A1 |
20030030636 | Yamaoka | Feb 2003 | A1 |
20030103683 | Horie | Jun 2003 | A1 |
20030151659 | Kawano et al. | Aug 2003 | A1 |
20030152264 | Perkins | Aug 2003 | A1 |
20040100565 | Chen et al. | May 2004 | A1 |
20040105074 | Soliz | Jun 2004 | A1 |
20040135780 | Nims | Jul 2004 | A1 |
20040136571 | Hewitson et al. | Jul 2004 | A1 |
20040218269 | Divelbiss et al. | Nov 2004 | A1 |
20050041123 | Ansari et al. | Feb 2005 | A1 |
20050100192 | Fujimura et al. | May 2005 | A1 |
20050191048 | Ramadan | Sep 2005 | A1 |
20050201612 | Park et al. | Sep 2005 | A1 |
20060008268 | Suwa | Jan 2006 | A1 |
20060098865 | Yang et al. | May 2006 | A1 |
20060120712 | Kim | Jun 2006 | A1 |
20060203335 | Martin et al. | Sep 2006 | A1 |
20060210111 | Cleveland et al. | Sep 2006 | A1 |
20060221072 | Se et al. | Oct 2006 | A1 |
20060221179 | Seo et al. | Oct 2006 | A1 |
20060222260 | Sambongi et al. | Oct 2006 | A1 |
20070024614 | Tam et al. | Feb 2007 | A1 |
20070047040 | Ha | Mar 2007 | A1 |
20070064098 | Tran | Mar 2007 | A1 |
20070146232 | Redert et al. | Jun 2007 | A1 |
20070165129 | Hill | Jul 2007 | A1 |
20070165942 | Jin et al. | Jul 2007 | A1 |
20070168820 | Kutz et al. | Jul 2007 | A1 |
20070189599 | Ryu et al. | Aug 2007 | A1 |
20070189747 | Ujisato et al. | Aug 2007 | A1 |
20070291143 | Barbieri et al. | Dec 2007 | A1 |
20080004073 | John et al. | Jan 2008 | A1 |
20080024614 | Li et al. | Jan 2008 | A1 |
20080030592 | Border et al. | Feb 2008 | A1 |
20080031327 | Wang et al. | Feb 2008 | A1 |
20080043848 | Kuhn | Feb 2008 | A1 |
20080056609 | Rouge | Mar 2008 | A1 |
20080062254 | Edwards et al. | Mar 2008 | A1 |
20080080852 | Chen et al. | Apr 2008 | A1 |
20080095402 | Kochi et al. | Apr 2008 | A1 |
20080112616 | Koo | May 2008 | A1 |
20080117289 | Schowengerdt et al. | May 2008 | A1 |
20080150945 | Wang et al. | Jun 2008 | A1 |
20080158345 | Schklair et al. | Jul 2008 | A1 |
20080180550 | Gulliksson | Jul 2008 | A1 |
20080218613 | Janson et al. | Sep 2008 | A1 |
20080240607 | Sun et al. | Oct 2008 | A1 |
20080252725 | Lanfermann et al. | Oct 2008 | A1 |
20080317379 | Steinberg et al. | Dec 2008 | A1 |
20090061381 | Durbin et al. | Mar 2009 | A1 |
20090073164 | Wells | Mar 2009 | A1 |
20090080036 | Paterson et al. | Mar 2009 | A1 |
20090116732 | Zhou et al. | May 2009 | A1 |
20090141967 | Hattori | Jun 2009 | A1 |
20090154793 | Shin et al. | Jun 2009 | A1 |
20090154823 | Ben-Ezra et al. | Jun 2009 | A1 |
20090167930 | Safaee-Rad et al. | Jul 2009 | A1 |
20090169102 | Zhang et al. | Jul 2009 | A1 |
20090290013 | Hayashi | Nov 2009 | A1 |
20090290037 | Pore | Nov 2009 | A1 |
20090295907 | Kim et al. | Dec 2009 | A1 |
20100030502 | Higgins | Feb 2010 | A1 |
20100039502 | Robinson | Feb 2010 | A1 |
20100080448 | Tam | Apr 2010 | A1 |
20100097444 | Lablans | Apr 2010 | A1 |
20100128109 | Banks | May 2010 | A1 |
20100134598 | Caron et al. | Jun 2010 | A1 |
20100142824 | Lu | Jun 2010 | A1 |
20100165152 | Lim | Jul 2010 | A1 |
20100171815 | Park et al. | Jul 2010 | A1 |
20100182406 | Benitez | Jul 2010 | A1 |
20100201682 | Quan et al. | Aug 2010 | A1 |
20100208942 | Porter et al. | Aug 2010 | A1 |
20100220932 | Zhang et al. | Sep 2010 | A1 |
20100238327 | Griffith et al. | Sep 2010 | A1 |
20100239158 | Rouge et al. | Sep 2010 | A1 |
20100295927 | Turner et al. | Nov 2010 | A1 |
20100303340 | Abraham | Dec 2010 | A1 |
20100309286 | Chen et al. | Dec 2010 | A1 |
20100309288 | Stettner et al. | Dec 2010 | A1 |
20100309292 | Ho et al. | Dec 2010 | A1 |
20110018975 | Chen et al. | Jan 2011 | A1 |
20110025825 | McNamer et al. | Feb 2011 | A1 |
20110025829 | McNamer et al. | Feb 2011 | A1 |
20110025830 | McNamer et al. | Feb 2011 | A1 |
20110050853 | Zhang et al. | Mar 2011 | A1 |
20110050859 | Kimmel et al. | Mar 2011 | A1 |
20110050864 | Bond | Mar 2011 | A1 |
20110109720 | Smolic et al. | May 2011 | A1 |
20110157320 | Oyama | Jun 2011 | A1 |
20110169921 | Lee et al. | Jul 2011 | A1 |
20110228051 | Dedeoglu et al. | Sep 2011 | A1 |
20110255775 | McNamer et al. | Oct 2011 | A1 |
20120105602 | McNamer et al. | May 2012 | A1 |
20120162374 | Markas et al. | Jun 2012 | A1 |
20120162379 | Dahi et al. | Jun 2012 | A1 |
20120314036 | Dahi et al. | Dec 2012 | A1 |
20130111464 | Markas et al. | May 2013 | A1 |
20130242059 | Dahi et al. | Sep 2013 | A1 |
20140009462 | McNamer et al. | Jan 2014 | A1 |
20190014307 | McNamer et al. | Jan 2019 | A1 |
Number | Date | Country |
---|---|---|
1056049 | Nov 2000 | EP |
1240540 | Feb 2011 | EP |
2405764 | Mar 2005 | GB |
2455316 | Jun 2009 | GB |
100653965 | Nov 2006 | KR |
9615631 | May 1996 | WO |
2005025239 | Mar 2005 | WO |
2006062325 | Jun 2006 | WO |
2008016882 | Feb 2008 | WO |
2008075276 | Jun 2008 | WO |
2009122200 | Oct 2009 | WO |
2010024479 | Mar 2010 | WO |
2010052741 | May 2010 | WO |
2010147609 | Dec 2010 | WO |
2011014419 | Feb 2011 | WO |
2011014420 | Feb 2011 | WO |
2011014421 | Feb 2011 | WO |
2011017308 | Feb 2011 | WO |
2012061549 | May 2012 | WO |
2012091878 | Jul 2012 | WO |
2012092246 | Jul 2012 | WO |
Entry |
---|
Zhang et al. “Next Generation Automatic Terrain Extraction Using Microsoft Ultracam Imagery”, BAE Systems National Security Solutions, May 2007. |
Dubuisson et al. “Fusing Color and Edge Information for Object Matching”, Michigan State University, IEEE 1994. |
Examiner-Initiated Interview Summary in related U.S. Appl. No. 16/129,273 dated Apr. 28, 2021. (1 page). |
Non-Final Office Action issued in related U.S. Appl. No. 13/337,676 dated Jun. 19, 2015. (32 pages). |
Notice of Allowance issued in related U.S. Appl. No. 13/337,676 dated Jan. 8, 2016. (125 pages). |
Notice of Allowance in related U.S. Appl. No. 13/115,589 dated Jun. 14, 2012. (54 pages). |
Office Action issued in related U.S. Appl. No. 16/129,273 dated Oct. 3, 2019. (11 pages). |
Final Office Action issued in related U.S. Appl. No. 16/129,273 dated Apr. 1, 2020. (17 pages). |
Notice of Allowability issued in related U.S. Appl. No. 16/129,273 dated Mar. 31, 2021. (7 pages). |
Office Action Response in related U.S. Appl. No. 16/129,273 dated Aug. 3, 2020. (6 pages). |
Office Action Response in related U.S. Appl. No. 16/129,273 dated Feb. 3, 2021. (6 pages). |
Request for Continued Examination Transmittal in related U.S. Appl. No. 16/129,273 dated Aug. 3, 2020. (3 pages). |
Notice of Allowance issued in related U.S. Appl. No. 16/129,273 dated Feb. 18, 2021. (9 pages). |
Non-Final Office Action in related U.S. Appl. No. 16/129,273 dated Dec. 11, 2020. (8 pages). |
Amendment after Final Office Action mailed by Applicant in related U.S. Appl. No. 13/337,676 dated Jun. 3, 2015. (11 pages). |
Applicant-Initiated Interview Summary dated Feb. 8, 2013—Related U.S. Appl. No. 12/842,171. (4 pages). |
Applicant-Initiated Interview Summary dated Jan. 30, 2013—Related U.S. Appl. No. 12/842,084 (6 pages). |
Applicant's Response dated Jan. 28, 2013 to U.S. Non-Final Office Action dated Oct. 26, 2012 for related U.S. Appl. No. 12/842,084. (12 pages). |
Chen, Shenchang Eric et al., View Interpolation for Image Synthesis, Proceedings of ACM SIGGRAPH, pp. 279-288, 1993 (7 pages). |
Chen, Shu-Ching et al., Video Scene Change Detection Method Using Unsupervised Segmentation and Object Tracking, IEEE International Conference on Multimedia and Expo (CME), pp. 57-60, 2001 (4 pages). |
International Search Report and Written Opinion dated Aug. 27, 2013, for related PCT Application Serial PCT/US2013/37010 (9 pages). |
International Search Report dated Jul. 18, 2012 for related application: Bahram Dahi et al.; Primary and Auxiliary Image Capture Devices for Image Processing and Related Methods filed Dec. 9, 2011 as PCT/US11/64050 (See also publication WO 2012/091878 published Jul. 5, 2012) (9 pages). |
International Search Report dated May 21, 2012 for related application: Michael McNamer et al.; Methods, Systems, and Computer Program Products for Creating Three-Dimensional Video Sequences filed Nov. 3, 2011 as PCT/US11/59057 (See also publication WO/2012/061549 published May 10, 2012) (8 pages). |
International Search Report dated Sep. 7, 2012 for related application: Tassos Markas et al.; Methods, Systems, and Computer-Readable Storage Media for Identifying a Rough Depth Map in a Scene and for Determining a Stereo-Base Distance for Three-Dimensional (3D) Content Creation filed Dec. 27, 2011 as PCT/US11/67349 (See also publication WO 2012/092246 published Jul. 5, 2012) (9 pages). |
International Preliminary Report on Patentability dated May 16, 2013, for related application No. PCT/US2011/059057 filed Nov. 3, 2011 (4 pages). |
Issue Notification dated Apr. 24, 2013 for related U.S. Appl. No. 13/584,744 (1 page). |
Joly, Phillippe et al., Efficient automatic analysis of camera work and microsegmentation of video using spatiotemporal images, 1996, Signal Process Image Communication, pp. 295-307. (13 pages). |
Krotkov, Eric et al., An Agile Stereo Camera System for Flexible Image Acquisition, 1988, IEEE, vol. 4, pp. 108-113 (6 pages). |
Mahajan et al., “Moving Gradients: A Path-Based Method for Plausible Image Interpolation”, Proceedings of ACM SIGGRAPH 2009, vol. 28, Issue 3 (Aug. 2009). (10 pages). |
McMillian, Jr., Leonard, An Image-Based Approach to Three-Dimensional Computer Graphics, PhD. Dissertation submitted to the University of North Carolina at Chapel Hill, Apr. 1997. (206 pages). |
Noll, Tobias et al., Markerless Camera Pose Estimation—An Overview, Visualization of Large and Unstructured Data Sets—IRTG Workshop, pp. 45-54, 2010 (10 pages). |
Notice of Allowance dated Apr. 9, 2013 for related U.S. Appl. No. 13/584,744. (10 pages). |
Notice of Allowance dated Mar. 26, 2013 for related U.S. Appl. No. 12/842,171. (34 pages). |
Non-Final Office Action in related U.S. Appl. No. 13/115,589 dated Oct. 11, 2011. (2 pages). |
Examiner Interview in related U.S. Appl. No. 13/115,589 dated Oct. 11, 2011. (3 pages). |
Applicant-Initiated Interview Summary in related U.S. Appl. No. 13/115,589 dated Jan. 10, 2012. (3 pages). |
Applicant's Response in related U.S. Appl. No. 13/337,676 dated Aug. 13, 2014. (11 pages). |
Final Rejection issued in related U.S. Appl. No. 13/337,676 dated Dec. 10, 2014. (28 pages). |
Search Report and Written Opinion for related PCT International Patent Application No. PCT/US10/43022 dated Nov. 16, 2010. (15 pages). |
Search Report and Written Opinion for related PCT International Patent Application No. PCT/US10/43023 dated Oct. 13, 2010. (11 pages). |
Search Report and Written Opinion for related PCT International Patent Application No. PCT/US10/43025 dated Feb. 9, 2012. (10 pages). |
U.S. Non-Final Office Action dated Dec. 5, 2012 for Related U.S. Appl. No. 12/842,257. (67 pages). |
U.S. Non-Final Office Action dated Nov. 26, 2012 for Related U.S. Appl. No. 12/842,171. (71 pages). |
U.S. Non-Final Office Action dated Oct. 26, 2012 for Related U.S. Appl. No. 12/842,084. (28 pages). |
Non-Final Office Action issued in counterpart U.S. Appl. No. 15/193,623 dated Oct. 13, 2017. (10 pages). |
Notice of Allowance issued in counterpart U.S. Appl. No. 15/193,623 dated May 22, 2018. (7 pages). |
Applicant's Response in related U.S. Appl. No. 13/115,589 dated Nov. 9, 2011. (11 pages). |
Final Rejection issued in related U.S. Appl. No. 13/115,589 dated Dec. 2, 2011. (13 pages). |
Amendment after Final Office Action mailed by Applicant in related U.S. Appl. No. 13/115,589 dated Jan. 2, 2012. (11 pages). |
Advisory Action issued in related U.S. Appl. No. 13/115,589 dated Jan. 12, 2012. (3 pages). |
Applicant's Request for Continued Examination in related U.S. Appl. No. 13/115,589 dated Jan. 20, 2012. (3 pages). |
Non-Final Office Action in related U.S. Appl. No. 13/337,676 dated Feb. 13, 2014. (25 pages). |
Applicant's Response in related U.S. Appl. No. 13/337,676 dated Jun. 3, 2015. (11 pages). |
Number | Date | Country |
---|---|---|
20210314547 A1 | Oct 2021 | US |
Number | Date | Country |
---|---|---|
61230133 | Jul 2009 | US | |
61230131 | Jul 2009 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 16129273 | Sep 2018 | US |
Child | 17351609 | | US |
Parent | 15193623 | Jun 2016 | US |
Child | 16129273 | | US |
Parent | 13115459 | May 2011 | US |
Child | 15193623 | | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 12842084 | Jul 2010 | US |
Child | 13115459 | | US |
Parent | 12842171 | Jul 2010 | US |
Child | 13115459 | | US |