Two dimensional video content, such as obtained with a video camera having a single aperture, is often either projected onto a display screen for viewing or viewed on a display designed for presenting two dimensional content. Over time, the resolution of displays has tended to increase, from standard interlaced television content (e.g., 480i), to high definition television content (e.g., 1080i), to 4K television content (4K UHD), and to even higher definition television content (e.g., 8K UHD). Such increases in video resolution provide only limited increases in the apparent image quality to the viewer. Accordingly, the viewer is only immersed in the video experience to a limited extent.
To increase the immersive experience of the viewer it is desirable to effectively convert two dimensional image content into three dimensional image content, which is thereafter displayed on a suitable display for viewing three dimensional image content. The perception of three dimensional content may involve a third dimension of depth, which may be perceived in the form of binocular disparity by the human visual system. Since the left and right eyes of the viewer are at different positions, each perceives a slightly different view of the field of view. The human brain may then reconstruct the depth information from these different views to perceive a three dimensional view. To emulate this phenomenon, a three dimensional display may display two or more slightly different images of each scene in a manner that presents each of the views to a different eye of the viewer. A variety of different display technologies may be used, such as, for example, anaglyph three dimensional systems, polarized three dimensional systems, active shutter three dimensional systems, head mounted stereoscopic display systems, and auto stereoscopic display systems.
As three dimensional display systems become more prevalent, the desire for suitable three dimensional content to present on such displays increases. One way to generate three dimensional content is using three dimensional computer generated graphics. While such content is suitable for being displayed, the amount of desirable three dimensional computer generated content is limited. Another way to generate three dimensional content is using three dimensional video camera systems. Likewise, while such video camera content is suitable for being displayed, the amount of desirable three dimensional camera content is likewise limited. A preferable technique to generate three dimensional content is to use the vast amounts of available two dimensional content and convert the two dimensional content into three dimensional content. While such conversion of two dimensional (2D) content to three dimensional (3D) content is desirable, the techniques are complicated and labor intensive.
The foregoing and other objectives, features, and advantages of the invention may be more readily understood upon consideration of the following detailed description of the invention, taken in conjunction with the accompanying drawings.
One technique to achieve two dimensional (2D) to three dimensional (3D) conversion is using a modified time difference technique. The modified time difference technique converts 2D images to 3D images by selecting images that would be a stereo-pair according to the detected motions of objects in the input sequential images. This technique may, if desired, be based upon motion vector information available in the video or otherwise determined.
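For illustration only, the pairing step might be sketched as follows; the column-profile motion estimate, the target disparity, and the function names are assumptions for this sketch rather than the specific method of any particular converter.

```python
import numpy as np

def horizontal_shift(prev, curr, max_shift=32):
    """Crude global horizontal motion estimate between two grayscale frames,
    found by sliding column-averaged intensity profiles past each other."""
    pa, pb = prev.mean(axis=0), curr.mean(axis=0)
    best_s, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = pa[s:], pb[:pb.size - s]
        else:
            a, b = pa[:s], pb[-s:]
        score = np.dot(a, b) / a.size          # normalize by overlap length
        if score > best_score:
            best_s, best_score = s, score
    return best_s

def pick_stereo_pair(frames, target_disparity=8):
    """Pair the newest frame with the earlier frame whose apparent horizontal
    motion is closest to the desired disparity; the two frames are then used
    as the left/right views of a stereo pair."""
    current = frames[-1]
    best_frame, best_err = None, float("inf")
    for past in frames[:-1]:
        err = abs(abs(horizontal_shift(past, current)) - target_disparity)
        if err < best_err:
            best_frame, best_err = past, err
    return current, best_frame
```

In practice, motion vector information already present in the video, as noted above, could replace the crude global estimate used in this sketch.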
Another technique to achieve two dimensional (2D) to three dimensional (3D) conversion is using a computed image depth technique. The 3D images are generated based upon the characteristics of each 2D image. The characteristics of the image that may be used include, for example, the contrast of different regions of the image, the sharpness of different regions of the image, and the chrominance of different regions of the image. The sharpness, contrast, and chrominance values of each area of the input image may be determined. The sharpness relates to the high frequency content of the luminance signal of the input image. The contrast relates to the medium frequency content of the luminance signal of the input image. The chrominance relates to the hue and the tone content of the color signal of the input image. Adjacent areas that have similar color may be grouped together according to their chrominance values. The image depth may be computed using these characteristics and/or other characteristics, as desired. For example, near objects generally have higher sharpness and higher contrast than far objects and the background image. Thus, the sharpness and contrast may be inversely proportional to the distance. These values may likewise be weighted based upon their spatial location within the image. Other techniques may likewise be used to achieve a 2D to 3D conversion of an input image, including motion compensation, if desired.
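As a rough illustration of how such characteristics might be combined, the following sketch (function name, block size, and weighting are assumptions) derives a coarse depth map in which blocks with higher sharpness and contrast are treated as nearer:

```python
import numpy as np

def estimate_depth(gray, block=16):
    """Toy depth-from-characteristics sketch (illustrative only).

    gray: 2D float array in [0, 1].  Higher local sharpness/contrast is mapped
    to smaller depth (nearer the viewer), per the inverse relation above."""
    h, w = gray.shape
    depth = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            tile = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
            # sharpness: mean absolute value of a simple Laplacian response
            lap = (4*tile[1:-1, 1:-1] - tile[:-2, 1:-1] - tile[2:, 1:-1]
                   - tile[1:-1, :-2] - tile[1:-1, 2:])
            sharpness = np.abs(lap).mean()
            contrast = tile.std()          # local contrast
            depth[by, bx] = 1.0 / (1.0 + sharpness + contrast)  # near -> small depth
    depth -= depth.min()                   # normalize to [0, 1]
    return depth / (depth.max() + 1e-9)
```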
Completely automatic 2D to 3D conversion processes typically result in sub-optimal three dimensional image content, so the conversion is preferably modified or otherwise controlled by a user in some manner to improve the resulting three dimensional image content.
The video content may be stored on the storage system 120, available from a network 150, or otherwise, and processed by the computing system 110. The user may use the display 100 as a user interface 160 for selecting three dimensional control parameters for the video content. The control parameters may be used to modify the 2D to 3D conversion process. The computing system may provide the 2D video content and/or control parameters 160 to the 2D to 3D conversion system 130, as described in detail later. The 2D-3D conversion system 130 then processes the 2D video content, based at least in part on the control parameters 160 provided (if any), to generate 3D video content. Preferably the 2D video is provided together with the control parameters 160 from the computing system 110 to the conversion system 130. For example, (1) the video content may be provided as a single video stream where the left and right images are contained in a single video stream, and/or (2) the video content may be provided as two separate video streams with a full video stream for the left eye's content and a full video stream for the right eye's content. The 3D video content, as a result of the conversion system 130, is rendered on the three dimensional display 140 so that the user may observe the effects of the control parameters 160 in combination with the 2D to 3D conversion 130. The user may modify the control parameters 160, such as by modifying selections on the user interface 160, for the video content until suitable 3D images are rendered on the three dimensional display 140. The resulting three dimensional content 170 from the 2D-3D conversion system 130 may be provided to the computing system 110, where it may be stored in a three dimensional video format (e.g., Dolby 3D, XpanD 3D, Panavision 3D, MasterImage 3D, IMAX 3D) for subsequent rendering on a three dimensional display. The 2D-3D conversion 130 is preferably a converter external to the computing system 110. Alternatively, the 2D-3D conversion 130 may be an add-on hardware device, such as a processing device on a PCI card maintained within the computing system 110. Alternatively, the 2D-3D conversion process may be performed by a processing device within the computing system 110, such as, for example, a graphics card. Alternatively, the 2D-3D conversion process may be performed by a program running on the computing system 110. Alternatively, the 3D display 140 and the 2D display 100 may be replaced by a single 3D display. As may be observed, the system may be used to modify the two dimensional content using control parameters in a manner suitable to improve the three dimensional representation of the image content.
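For illustration, the control parameters 160 accompanying a segment might be organized as a simple record such as the sketch below; the field names, defaults, and packaging choices are hypothetical rather than a defined format of the system:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ControlParameters:
    """Illustrative container for per-segment settings that the user adjusts
    on the interface and that accompany the 2D video to the conversion system.
    All field names and defaults are assumptions for this sketch."""
    depth_strength: float = 1.0          # overall amount of generated depth
    zero_plane: float = 0.5              # normalized depth aligned with the screen
    segmentation_break: Optional[float] = None   # optional additional depth plane
    eye_swap: bool = False               # swap the left/right views
    artifact_colors: List[dict] = field(default_factory=list)  # per-color selectors
    output_format: str = "side_by_side"  # single-stream vs. two-stream packaging
```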
It is desirable to include a suitable user interface 160 that enables the user to efficiently and effectively adjust the conversion of the 2D video content to the 3D video content. To achieve this, it was determined that video segments of the video content that have relatively similar content tend to use sufficiently similar control parameters for the 2D to 3D conversion to achieve desirable results. A key frame detection system may be used to process the video stream to automatically identify a set of key frames within the video stream. A key frame may be representative of a series of frames of a video stream, such as a scene or clip of the video stream. By way of example, the key frame detection may be based upon a histogram of the video content, where a sufficient difference between the histograms of adjacent video frames may be used to indicate a key frame. In other cases, it is desirable for the user to manually identify each of the key frames. In this manner, the video content between adjacent key frames tends to be sufficiently similar that an automatic 2D to 3D conversion coupled with control parameters tends to provide sufficient 3D image quality for a segment or clip, where the key frame is the first frame (or any suitable frame(s)) of a video segment having sufficiently similar content.
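A minimal histogram-based key frame detector might look like the following sketch, assuming normalized grayscale frames; the bin count, distance metric, and threshold are illustrative assumptions:

```python
import numpy as np

def detect_key_frames(frames, bins=32, threshold=0.25):
    """Mark a frame as a key frame when its normalized histogram differs
    sufficiently from the previous frame's histogram."""
    key_frames = [0]                       # first frame starts the first segment
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between adjacent histograms
            if np.abs(hist - prev_hist).sum() > threshold:
                key_frames.append(i)
        prev_hist = hist
    return key_frames
```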
Another of the three dimensional controls 1210 may include a zero plane position 1260. The zero plane position 1260 may be considered to be the position within the image 1240 that is aligned with the display screen 1230. For example, a small zero plane value would tend to render most of the image 1240 as appearing to the rear of the display 1230, while a large zero plane value would tend to render most of the image 1240 as appearing in front of the display 1230. Typically, the zero plane is selected to be at the two dimensional focus of the image content. The zero plane position 1260 may likewise be used to render a substantial portion of the image content in front of the display 1230, which provides a “pop out” of the image content.
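One way to picture the zero plane control is as an offset applied when depth is mapped to screen disparity; the sketch below assumes a normalized depth map (0 = nearest, 1 = farthest) and an arbitrary maximum disparity, and is not the converter's actual mapping:

```python
import numpy as np

def depth_to_disparity(depth, zero_plane=0.5, max_disparity_px=20.0):
    """Map a normalized depth map to per-pixel disparity.  Pixels at
    depth == zero_plane land on the screen plane; a small zero_plane value
    leaves most content behind the display, while a large value pulls most
    content in front of it ("pop out")."""
    return (np.asarray(depth) - zero_plane) * max_disparity_px
```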
Another of the three dimensional controls 1210 may include a segmentation break 1270, which defines a depth position 1272 in the three dimensional image space. The depth position 1272 may be considered another depth plane. Around this depth position 1272, other effects may be efficiently applied to the image content. For example, the depth position 1272 may be positioned in a region behind foreground images and in front of the background image. Further, a plurality of depth positions 1272 may be selected, if desired. Further, the depth position 1272 may be a range of depths, with the effects occurring on either side of the range; in this manner, the range of depths may remain unchanged while the effects occur on either side of it. In addition, the effects may extend the entire range from the depth position 1272 in one or both directions and/or may extend a limited range from the depth position 1272 in one or both directions (such as to one or more “stop positions”).
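As an illustration of applying an effect around such a depth position, the following sketch (break position and gain values are assumptions) scales depths in front of the break while leaving the break plane itself, and with unit gain the content behind it, unchanged:

```python
import numpy as np

def apply_segmentation_break(depth, break_depth=0.6, gain_front=1.2, gain_behind=1.0):
    """Scale depths on either side of a break plane.  Values at the break plane
    are unchanged; gains above 1.0 push content further from the plane."""
    out = depth.copy()
    front = depth < break_depth
    out[front] = break_depth - (break_depth - depth[front]) * gain_front
    out[~front] = break_depth + (depth[~front] - break_depth) * gain_behind
    return np.clip(out, 0.0, 1.0)
```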
Even with advanced 2D to 3D conversion techniques, the determination of the three dimensional image content tends to leave some aspects of the image located at an undesirable depth within the image. For example, one undesirable effect may be depth spiking, usually toward the viewer, which results when the depth techniques try to resolve the location of a visually significant feature with high brightness and/or high saturation that is dramatically different from the surrounding background. For example, the three dimensional location of a table lamp that should be positioned on a table may be inappropriately located at a position behind the table. While such misplaced locations may be readily apparent to the user, it is desirable to include tools accessible from the interface to effectively select objects in some manner so that those objects may be relocated to a more appropriate image depth.
The artifact suppression feature may include a right eye/left eye swap feature 1910, where the images presented to the eyes are switched. This tends to be useful when the video scene is back lighted, which can cause the depth map to invert to some extent. If this is the case, this swapping of the eyes is readily performed in an efficient manner and may result in a sufficiently high 3D image quality.
The artifact suppression feature may include an artifact color 1 selection 1920, which enables a first set of selectors 1930. The first set of selectors 1930 includes an intensity selector 1940. The intensity selector 1940 may select a range of intensities within the image, from a lower value 1942 to a higher value 1944. In this manner, those pixels (or groups of pixels) within the image that contain values within the range of intensity values are selected. In addition, those pixels that are selected may be highlighted in some manner on the display so that they can be readily identified. Accordingly, if it is desirable to select a bright object, a bright intensity range may be selected that corresponds with the object of interest and adjusted until the object of interest is sufficiently discriminated from the non-objects of interest in the image.
The first set of selectors 1930 includes a hue selector 1950. The hue selector 1950 may select a range of hues within the image, from a lower value 1952 to a higher value 1954. In this manner, those pixels (or groups of pixels) within the image that contain values within the range of hue values are selected. In addition, those pixels that are selected may be highlighted in some manner on the display so that they can be readily identified. Accordingly, if it is desirable to select an object with a particular range of hues, a hue range may be selected that corresponds with the object of interest and adjusted until the object of interest is sufficiently discriminated from the non-objects of interest in the image.
The first set of selectors 1930 includes a saturation selector 1960. The saturation selector 1960 may select a range of saturations within the image, from a lower value 1962 to a higher value 1964. In this manner, those pixels (or groups of pixels) within the image that contain values within the range of saturation values are selected. In addition, those pixels that are selected may be highlighted in some manner on the display so that they can be readily identified. Accordingly, if it is desirable to select an object with a particular range of saturation, a saturation range may be selected that corresponds with the object of interest and adjusted until the object of interest is sufficiently discriminated from the non-objects of interest in the image.
A depth offset 1965 may be used to select the offset of the selected region of the image, such as an offset toward the rear or an offset toward the front. Other selectors may be used, as desired, to further discriminate aspects of the image. The selectors 1940, 1950, 1960 may be used in combination with one another to provide object discrimination.
An attenuation selector 1955 may be used to select the attenuation of the selected region of the image, such as an attenuation toward the rear or an attenuation toward the front. The attenuation in effect modifies the 2D to 3D conversion process for the identified image characteristics, such as a modification of the depth map and/or a modification of the generation process, to select how much the depth of the selected image characteristics is changed. In this manner, the effect may be subtle, moving the depth of the image characteristics in a manner that is more visually pleasing than simply reassigning the absolute depth of such image characteristics.
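A combined intensity/hue/saturation selection with a depth offset and attenuation might be sketched as below; the HSV color space, the example ranges, and the way the offset and attenuation combine are illustrative assumptions rather than the interface's defined behavior:

```python
import colorsys
import numpy as np

def artifact_color_mask(rgb, intensity=(0.7, 1.0), hue=(0.05, 0.15), sat=(0.5, 1.0)):
    """Select pixels whose intensity, hue, and saturation all fall inside the
    chosen ranges (rgb values assumed to be floats in [0, 1])."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb[y, x]
            hh, ss, vv = colorsys.rgb_to_hsv(r, g, b)
            mask[y, x] = (intensity[0] <= vv <= intensity[1]
                          and hue[0] <= hh <= hue[1]
                          and sat[0] <= ss <= sat[1])
    return mask

def adjust_selected_depth(depth, mask, offset=-0.1, attenuation=0.5):
    """Apply a depth offset to the selected pixels, attenuated so the move is
    partial rather than an absolute reassignment of depth."""
    out = depth.copy()
    out[mask] = np.clip(depth[mask] + offset * attenuation, 0.0, 1.0)
    return out
```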
In many cases, it is desirable to select a region of the image, which enlarges that region of the image so that an eye dropper selector may be used to select samples of that region. The samples of the region (one or more pixels) may be used to provide initial intensity 1940, hue 1950, and/or saturation 1960 ranges. In this manner, the user may more readily identify suitable ranges for the desired image characteristics.
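For illustration, seeding the selector ranges from eye dropper samples might look like the following sketch, assuming RGB values in [0, 1]; the padding margin and the returned dictionary layout are assumptions:

```python
import colorsys

def ranges_from_samples(rgb, points, pad=0.05):
    """Turn sampled pixels into initial intensity, hue, and saturation ranges:
    the min/max of the samples, widened slightly by pad.  Hue wrap-around is
    ignored for brevity."""
    hsv = [colorsys.rgb_to_hsv(*rgb[y, x]) for (y, x) in points]
    hs, ss, vs = zip(*hsv)

    def span(values):
        return max(min(values) - pad, 0.0), min(max(values) + pad, 1.0)

    return {"intensity": span(vs), "hue": span(hs), "saturation": span(ss)}
```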
The artifact suppression feature may include an artifact color 2 selection 1970, which enables a second set of selectors 1980, which are similar to the first set of selectors. The artifact color 2 selection 1970 may be used to further refine the selection within the first set of selectors 1930 and/or may be used to select another set of image content in addition to the first set of selectors 1930. Additional artifact color selections may be included, as desired. A depth offset 1990 may be used to select the offset of the selected region of the image as a result of the artifact color 2 selection 1970, such as an offset toward the rear or an offset toward the front. Also, an attenuation selector 1995 may likewise be used.
Other controls may be provided for the modification of the 2D to 3D conversion process, such as for example, layers, masks, brushes, curves, and/or levels.
In another embodiment, the system may include a plurality of different tracks for the video. For example, the first track may relate to modifications of the video related to depth settings, such as for 2D to 3D conversion. For example, the second track may relate to modifications of the video related to color grading of the video content. If desired, the color grading may be further dependent on the 2D to 3D conversion. For example, the third track may relate to modifications of the video related to color perception of the video content. If desired, the color perception may be further dependent on the 2D to 3D conversion process. For example, the fourth track may relate to modifications associated with the video content synthesizing smell. If desired, the synthesized smell may be further dependent on the 2D to 3D conversion process. For example, the fifth track may relate to modifications of the video related to a fourth dimension related to the video content, such as movement of a chair in which a viewer would be sitting. If desired, the fourth dimension may be further dependent on the 2D to 3D conversion process.
If desired, the control codes may be in the form of an event list associated with one or more tracks. Each of the events may be associated with a particular location on the timeline, such that upon reaching that location during viewing of the video, the event that is indicated is undertaken. For example, some events may include reading control parameters for the 2D to 3D conversion process, some events may include executing processes, some events may identify one or more key frames, etc.
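For illustration, such an event list might be represented as a list of simple records like the sketch below; the field names and event kinds are assumptions rather than a defined format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimelineEvent:
    """Illustrative event record: each event is tied to a timeline location and
    carries the action taken when playback reaches it."""
    frame_index: int                      # location on the timeline
    kind: str                             # e.g. "load_params", "run_process", "key_frame"
    payload: Optional[dict] = None        # control parameters, process handle, etc.

def events_at(event_list, frame_index):
    """Return the events that fire at a given timeline position."""
    return [e for e in event_list if e.frame_index == frame_index]
```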
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.
This application claims the benefit of U.S. Provisional App. No. 62/014,269, filed Jun. 19, 2014.