Method and apparatus for generating enhanced 3D-effects for real-time and offline applications

Information

  • Patent Grant
  • Patent Number
    10,250,864
  • Date Filed
    Wednesday, December 27, 2017
  • Date Issued
    Tuesday, April 2, 2019
  • Inventors
    • Barkatullah; Javed Sabir (Portland, OR, US)
  • Original Assignees
  • Examiners
    • Vaughn, Jr.; William C
    • Jean Baptiste; Jerry T
  • Agents
    • Chernoff, Vilhauer, McClung & Stenzel, LLP
Abstract
A method for adjusting and generating enhanced 3D-effects for 2D to 3D image and video conversion applications includes controlling a depth location of a zero parallax plane within a depth field of an image scene to adjust parallax of objects in the image scene, controlling a depth volume of objects in the image scene to one of either exaggerate or reduce 3D-effect of the image scene, controlling a depth location of a segmentation plane within the depth field of the image scene, dividing the objects in the image scene into a foreground group and a background group, selectively increasing or decreasing depth volume of objects in the foreground group, selectively increasing or decreasing depth separation of objects in the foreground group relative to the objects in the background group, and generating an updated depth map file for a 2D-image.
Description
BACKGROUND OF THE INVENTION

Embodiments here relate generally to the field of 2D to 3D video and image conversion performed either in real time or offline. More particularly, the embodiments relate to a method and apparatus for enhancing and/or exaggerating depth and negative parallax and adjusting the zero-parallax plane, also referred to as the screen plane, for 3D-image rendering on different 3D display technologies and formats.


With the rising sales of 3D-enabled TVs and personal devices in the consumer segment, the need to release new and old movies in 3D is increasing. In the commercial application space, the use of large screen electronic billboards which can display attention grabbing 3D-images for advertising or informational purposes has increased. Because of the increasing demand for creating 3D-content, the demand to automatically or semi-automatically convert existing 2D-content to 3D-content also increases. Enhancing the 3D-experience of consumers and viewers can produce further growth of the 3D entertainment and advertisement market. A demand exists for tools and services to generate stunning 3D-image effects.


Traditionally, converting 2D videos to 3D for professional applications starts with generating a depth map of the image for each video frame using a very labor-intensive manual process of roto-scoping, in which objects in each frame are manually and painstakingly traced by the artist and depth information for each object is painted by hand. For consumer applications, such as the built-in automated 2D to 3D function in 3D-TVs or game consoles, the converted 3D-image suffers from extremely poor depth and pop-out effects. Moreover, there is no automated control to modify the zero-parallax plane position and artificially exaggerate the pop-out or depth of selective objects for enhanced special effects.


Numerous research publications exist on methods of automatically generating a depth map from a mono-ocular 2D-image for the purpose of converting the 2D-image to a 3D-image. The methods range from very simplistic heuristics to very complicated and compute-intensive image analysis. Simple heuristics may be suitable for real-time conversion applications but provide poor 3D quality. On the other hand, complex mathematical analysis may provide good 3D-image quality but may not be suitable for real-time application and hardware implementation.


A greyscale image represents the depth map of an image in which each pixel is assigned a value between and including 0 and 255. A value of 255 (100% white level) indicates the pixel is front-most and a value of 0 indicates the pixel is back-most. The depth value of a pixel is used to calculate the horizontal (x-axis) offset of the pixel for the left and right eye view images. In particular, if the calculated offset is w for the pixel at position (x,y) in the original image, then this pixel is placed at position (x+w, y) in the left image and (x−w, y) in the right image. If the value of the offset w for a pixel is positive, it creates a negative parallax where the pixel appears to pop out of the screen. Alternatively, if the value of the offset w for a pixel is negative, it creates a positive parallax where the pixel appears to be behind the screen plane. If the offset w is zero, the pixel appears on the screen plane. The larger the offset, the greater the disparity between the left and right eye views and hence the larger the depth inside the screen or the pop-out from the screen. Hence, given a depth map for a 2D, or monocular, image, by selectively manipulating the offsets of the pixels for 3D rendering, it is possible to artificially enhance or exaggerate 3D effects in a scene, and these transformations can be done in real time or offline.
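

As a concrete illustration of this placement rule, the following minimal Python sketch assumes a simple linear mapping from the 8-bit depth value to the offset w; the parameter names zero_plane and max_disparity and their default values are illustrative assumptions, not values taken from the description above.

def offset_from_depth(depth, zero_plane=128, max_disparity=16):
    """Map an 8-bit depth (0 = back-most, 255 = front-most) to a signed offset w.

    w > 0 -> negative parallax (the pixel appears to pop out of the screen)
    w < 0 -> positive parallax (the pixel appears behind the screen plane)
    w = 0 -> the pixel lies on the screen (zero-parallax) plane
    """
    return round((depth - zero_plane) * max_disparity / 255)


# Placement rule for a pixel at (x, y) in the original image:
# the left view receives it at (x + w, y), the right view at (x - w, y).
x, y, depth = 320, 200, 220
w = offset_from_depth(depth)
left_position, right_position = (x + w, y), (x - w, y)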





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an exemplary block diagram of the system, according to one embodiment of the invention.



FIG. 2 illustrates an exemplary transformation from the depth value of a pixel in the 2D image to its offset for placement in the left and right eye view images.



FIG. 3 illustrates four settings of an exemplary graphical user interface (GUI) with which the user can move the location of the screen plane (also known as the zero plane) in the scene, according to one software embodiment of the invention.



FIG. 4 illustrates a graphical user interface (GUI) for the user to control depth volume, according to one embodiment of the invention.



FIG. 5 illustrates an exemplary method for exaggerating depth by adding a step offset for all depths equal to or greater than a user-defined value, according to one embodiment of the invention. In another embodiment, the slope of the depth-to-offset function is modified to exaggerate the 3D-effect.



FIG. 6 illustrates an exemplary method for exaggerating depth by adding a step offset and scaling the slope of the depth-to-offset function for all depths equal to or greater than a user-defined value, according to one embodiment of the invention.



FIG. 7 illustrates yet another exemplary method for exaggerating depth by using an exponential transfer function for depth to offset, according to one embodiment of the invention.



FIG. 8 illustrates an exemplary flow chart for rendering an exaggerated 3D image, given a 2D image source and its depth map, according to one embodiment of the invention.





DETAILED DESCRIPTION

Embodiments here relate to a method, apparatus, system, and computer program for modifying, enhancing or exaggerating a 3D-image rendered from a mono-ocular (2D) image source and its depth map. In an interactive mode, the user can control and change the attributes and quality of the 3D-rendition of a 2D-image using a graphical user interface (GUI). Optionally, such control settings can be presented to the 3D-render engine as commands stored in a file and read by the 3D-rendering application or routine. These attributes and qualities of the 3D image are not specific to a particular 3D-format but can be used for all 3D formats, including but not limited to various stereo-3D formats and glasses-free multi-view auto-stereo formats. The embodiments can take advantage of the computing power of a general purpose CPU, a GPU, or a dedicated FPGA or ASIC chip to process a sequence of images from video frames of a streaming 2D-video to generate 3D video frames. Depending on the available processing capabilities of the processing unit and the complexity of the desired transformations, the conversion of 2D video frames to 3D can be done in real time.


In one embodiment, the enhanced 3D-experience may be implemented as a software application running on a computing device such as a personal computer, tablet computer or smart-phone. A user receives a streaming 2D-video from the internet or from a file stored on a local storage device. The user then uses the application GUI to adjust the quality and attributes of the 3D-video during automatic 2D to 3D conversion and displays it on an attached 3D display in real time. In one embodiment, the converted, enhanced 3D-video can be stored back on the local or network storage device.


In one embodiment, the 2D to 3D conversion process is implemented as a software application running on a computing device such as a personal computer, tablet computer or smart-phone. A user loads a video from a file stored on a local or network attached storage device and uses the application, either automatically or in an interactive mode, to convert the 2D video to 3D and store it back offline on the local or network attached disk. In one embodiment, the user settings for 3D attributes can be stored in a file using some pre-defined syntax such as XML and can be read in by the 2D to 3D conversion application and applied during the rendering of the 3D-video.
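

The description above does not define the control-file syntax beyond noting that it may use a pre-defined format such as XML. Purely as a sketch, the snippet below invents a small set of element and attribute names (render3d, zero_plane, max_disparity, segmentation); none of these names come from the specification, and the file is parsed with Python's standard ElementTree module.

import xml.etree.ElementTree as ET

# Hypothetical control-file syntax; the element and attribute names are
# illustrative only and are not defined by the specification.
SAMPLE = """
<render3d>
  <zero_plane>170</zero_plane>
  <max_disparity>16</max_disparity>
  <segmentation depth="180" step="4" slope_scale="1.5"/>
</render3d>
"""

def load_controls(xml_text):
    root = ET.fromstring(xml_text)
    controls = {
        "zero_plane": int(root.findtext("zero_plane", "128")),
        "max_disparity": int(root.findtext("max_disparity", "16")),
    }
    seg = root.find("segmentation")
    if seg is not None:
        controls["seg_depth"] = int(seg.get("depth", "255"))
        controls["step"] = int(seg.get("step", "0"))
        controls["slope_scale"] = float(seg.get("slope_scale", "1.0"))
    return controls

controls = load_controls(SAMPLE)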


In one embodiment, the enhanced 3D render method is implemented in dedicated hardware such as an FPGA or a custom ASIC chip as an independent 3D-render application. In one embodiment, the enhanced 3D render method is implemented in dedicated hardware such as an FPGA or a custom ASIC chip as part of a larger 2D to 3D conversion application. In one embodiment, the enhanced 3D-render video conversion system is implemented as a stand-alone converter box. In one embodiment, the entire 2D to 3D video conversion system is implemented as a circuit board or a daughter card. In one embodiment, a stand-alone implementation of the conversion system can be attached to the output of a streaming video receiver, broadcast TV receiver, satellite-TV receiver or cable-TV receiver, and the output of the standalone converter box can be connected to 3D-displays.


In one embodiment, the enhanced 3D render method is implemented as a software application utilizing the graphics processing unit (GPU) of a computing device such as a personal computer, tablet computer or smart-phone to enhance performance.


In one embodiment, the system receives a 2D image and its depth map either separately in a synchronized fashion or together in a single frame, usually referred to as the 2D+D format, and the software or hardware implementation of the enhanced 3D-render method uses them to produce the enhanced 3D-image.
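

The exact packing of the single-frame 2D+D format is not specified here, so the following sketch assumes, purely for illustration, a side-by-side layout in which the left half of the frame carries the 2D image and the right half carries the greyscale depth map.

import numpy as np

def split_2d_plus_d(frame):
    """Split a single 2D+D frame into the 2D image and its depth map.

    Assumes (illustrative assumption) a side-by-side packing: the left half
    holds the 2D image and the right half holds the greyscale depth map,
    with depth taken from any one channel of that half.
    """
    height, width, _ = frame.shape
    image = frame[:, : width // 2]
    depth_map = frame[:, width // 2 :, 0]
    return image, depth_map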



FIG. 1 shows an exemplary block diagram of a 2D to 3D conversion process, according to one embodiment. In one embodiment, the process comprises receiving a single image frame or a sequence of image frames. The depth map generator block 102 generates the depth map 112 from the 2D-source image. In one embodiment, the depth map 112 is used by the enhanced 3D-render block 106, which generates a transformed depth map used by the render engine to calculate new pixel displacements.



FIG. 2 illustrates one embodiment of the transformation from pixel depth to pixel offset in the 3D-image. Lines 101 and 102 are the linear transformations from depth to offset for the right and left eye view images. 103 represents a plane in the depth field where the left and right eye view offsets are both zero. All objects with depths, and hence offsets, to the right of this plane will have negative parallax, meaning the object will appear to pop out of the screen. All objects with depths, and hence offsets, to the left of this plane will have positive parallax, meaning the object will appear to be behind the screen.



FIG. 3 illustrates one embodiment of a graphical user interface (GUI) 202 that enables the user to adjust the location of the zero plane, which is the point in graph 201 where the two lines meet. The GUI 202 shows the offset of the zero plane to be zero. Different settings of this GUI are shown with the corresponding adjustments represented by the graphs above them. GUI 204 shows an offset in which the zero plane position is 127 on the GUI, and the graphical representation is shown as 203. Similarly, GUI 206 shows an offset of 170, with the zero plane moving to the right as shown as 205, and GUI 208 shows an offset of 255, with the zero plane at the farthest right position.
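

The following short sketch shows how the zero-plane slider values of FIG. 3 (0, 127, 170 and 255) would shift the crossover point of a linear depth-to-offset mapping; the mapping itself and the maximum disparity of 16 are illustrative assumptions rather than values taken from the figure.

import numpy as np

depths = np.arange(256)
for zero_plane in (0, 127, 170, 255):
    # Depths greater than zero_plane give w > 0 (pop-out, negative parallax);
    # depths less than zero_plane give w < 0 (behind the screen plane).
    w = np.round((depths - zero_plane) * 16 / 255).astype(int)
    print(zero_plane, int(w.min()), int(w.max()))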



FIG. 4 illustrates one embodiment of a graphical user interface (GUI) 302 that enables the user to adjust the amount of depth in the 3D-image by adjusting the amount of disparity produced between the left and right eye views. GUI 304 shows a lower value for disparity. As shown by comparing 301 and 303, the lower value results in less depth.
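

To illustrate the depth-volume control of FIG. 4, the sketch below treats the disparity setting as a maximum-disparity value that scales a linear depth-to-offset mapping; the function name and defaults are assumptions for illustration only.

import numpy as np

def offsets(depth_map, zero_plane=128, max_disparity=16):
    # Linear depth-to-offset mapping; max_disparity acts as the depth-volume knob.
    return np.round((depth_map.astype(int) - zero_plane) * max_disparity / 255).astype(int)

depth_map = np.random.randint(0, 256, size=(480, 640))
shallow = offsets(depth_map, max_disparity=4)   # smaller disparity, flatter scene
deep = offsets(depth_map, max_disparity=24)     # larger disparity, exaggerated depth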



FIG. 5 illustrates two embodiments of a graphical user interface (GUI) consisting of controls 402, 404 and 406 that enable the user to selectively separate objects from background objects and pop them out. A step offset value 403 is used in one embodiment. A scaled slope 405 is used in another embodiment. The depth location where the step offset or slope scaling is applied is indicated by 401 and is controlled by the GUI control 402.
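

A minimal sketch of the segmentation-based exaggeration of FIG. 5, which also covers the combined case of FIG. 6: depths at or beyond a segmentation depth receive a constant step offset and a scaled slope on top of a linear base mapping. The parameter names (seg_depth, step, slope_scale) and their defaults are illustrative assumptions.

import numpy as np

def segmented_offset(depth_map, zero_plane=128, max_disparity=16,
                     seg_depth=180, step=4, slope_scale=1.5):
    """Hypothetical depth-to-offset transform with foreground segmentation.

    Depths at or beyond seg_depth (the segmentation plane) get a constant
    step offset plus a steeper slope, which separates the foreground group
    from the background group and leaves a prohibited range of depths.
    """
    d = depth_map.astype(float)
    base = (d - zero_plane) * max_disparity / 255.0
    extra = (d - seg_depth) * (slope_scale - 1.0) * max_disparity / 255.0 + step
    foreground = d >= seg_depth
    return np.round(np.where(foreground, base + extra, base)).astype(int)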



FIG. 6 illustrates one embodiment of a graphical user interface (GUI) where both the step offset and the slope scaling are applied simultaneously. The GUI 502 with the values shown results in the representation shown as 503.



FIG. 7 illustrates one embodiment where the depth to offset transformation is exponential. This creates an effect where all the background objects are squished flat, while the objects in the foreground have increasingly exaggerated depth and/or pop-out. In general, the exponential function can be replaced by any nonlinear, monotonic function to create special 3D-effects.
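

A sketch of the nonlinear mapping described for FIG. 7, using an exponential curve normalized to a maximum disparity; the constant k and the normalization are illustrative choices, and any other monotonic nonlinear function could be substituted.

import numpy as np

def exponential_offset(depth_map, max_disparity=16, k=4.0):
    """Monotonic exponential depth-to-offset transform.

    Background depths map to nearly identical small offsets (squished flat),
    while foreground depths receive increasingly exaggerated offsets.
    """
    d = depth_map.astype(float) / 255.0                 # normalize depth to [0, 1]
    curve = (np.exp(k * d) - 1.0) / (np.exp(k) - 1.0)   # monotonic curve in [0, 1]
    return np.round(curve * max_disparity).astype(int)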



FIG. 8 illustrates one embodiment of a flowchart for the enhanced 3D-render method. At 800, the process obtains the control data needed for further processing. This data may include the maximum disparity, the zero plane position, and the segmentation type, amount and location. At 802, the process calculates the offset for the right and left eye views using the pixel depth from the depth map and the control data. At 804, the process renders the right and left eye views using the offsets for each pixel.
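

Tying the three steps of FIG. 8 together, the sketch below assumes the hypothetical control names and the linear mapping used in the earlier snippets and renders the two views by shifting pixels horizontally; occlusion handling and hole filling, which a practical render engine would need, are omitted.

import numpy as np

def render_enhanced_3d(image, depth_map, controls):
    # Step 800: obtain control data (hypothetical keys, illustrative defaults).
    max_disparity = controls.get("max_disparity", 16)
    zero_plane = controls.get("zero_plane", 128)

    # Step 802: calculate per-pixel offsets for the left and right eye views.
    w = np.round((depth_map.astype(int) - zero_plane) * max_disparity / 255).astype(int)

    # Step 804: render the two views by shifting each pixel horizontally.
    height, width = depth_map.shape
    ys, xs = np.indices((height, width))
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    left[ys, np.clip(xs + w, 0, width - 1)] = image
    right[ys, np.clip(xs - w, 0, width - 1)] = image
    return left, right

# Example with a flat grey test image, a random depth map, and sample controls.
image = np.full((480, 640, 3), 128, dtype=np.uint8)
depth_map = np.random.randint(0, 256, size=(480, 640))
left_view, right_view = render_enhanced_3d(image, depth_map, {"zero_plane": 170})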


Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic “may,” “might,” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.


While the invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description.

Claims
  • 1. A method for adjusting and generating enhanced 3D-effects for real time and offline 2D to 3D image and video conversion applications consisting of: (a) selectively controlling a depth location of a zero parallax plane within a depth field of an image scene to adjust parallax of objects in the image scene;(b) selectively controlling a depth volume of objects in the image scene to one of either exaggerate or reduce 3D-effect of the image scene;(c) selectively controlling a depth location of a segmentation plane within the depth field of the image scene, wherein said depth location is a non-zero depth location, dividing the objects in the image scene into a foreground group and a background group based on a location of the objects relative to the segmentation plane wherein an object of said foreground group is in said background group when said depth location of said segmentation plane is moved forward and wherein an object of said background group is in said foreground group when said depth location of said segmentation plane is moved backward, wherein said segmentation plane is moved from a zero location to a different location where as a result of moving said segmentation plane to said different location at least one of, (i) objects that were in said foreground group when said segmentation plane was at said zero location are moved to said background group when said segmentation plane is moved to said different location, and;(ii) objects that were in said background group when said segmentation plane was at said zero location are moved to said foreground group when said segmentation plane is moved to said different location;(d) selectively increasing or decreasing depth volume of objects in the foreground group independently of selectively increasing or decreasing depth volume of objects in the background group, wherein said depth volume of objects in said foreground group is modified to change available foreground volume in which objects to be rendered are mapped, wherein said depth volume of objects in said background group is modified to change available background volume in which objects to be rendered are mapped, (i) wherein objects that were in said foreground group when said segmentation plane was at said zero location that are moved to said background group when said segmentation plane is moved to said different location are said selectively increased or decreased in said depth volume as objects in said background group, and(ii) wherein objects that were in said background group when said segmentation plane was at said zero location that are moved to said foreground group when said segmentation plane is moved to said different location are said selectively increased or decreased in said depth volume as objects in said foreground group;(e) selectively increasing or decreasing depth separation of objects in the foreground group relative to the objects in the background group, where said separation includes both a step offset and a slope scaling, wherein said step offset and said slope scaling is relative to said available foreground volume being fixed, wherein said step offset and said slope scaling is relative to said available background volume being fixed, wherein objects in said foreground group and said background group include a continuous range of available depths prior to said selectively increasing or decreasing said depth separation of objects in said foreground group relative to said objects in said background group and wherein objects in said foreground group and 
said background group include a discontinuous range of available depths after said selectively increasing or decreasing said depth separation of objects in said foreground group relative to said objects in said background group, wherein said discontinuous range includes a prohibited range of depths within said continuous range of available depths being said step offset;(f) generating an updated depth map file for a 2D-image based upon the controlling the depth location, the controlling the depth volume, the increasing and decreasing depth volume, and the increasing and decreasing depth separation;(g) rendering an enhanced 3D-image using the updated depth map.
  • 2. The method of claim 1, wherein the method further comprises a software application running on a computing device.
  • 3. The method of claim 2, wherein the computing device comprises one of a server computer, personal computer, tablet computer or smart-phone, graphics processor unit.
  • 4. The method of claim 1, further comprising receiving a 2D-still image or a streaming 2D-video from a network with an associated depth map.
  • 5. The method of claim 1, further comprising reading a 2D-still image or a 2D-video from a file stored on a local or remote storage device with the associated depth map image.
  • 6. The method of claim 1, further comprising generating a depth map for each 2D-still image or a sequence of depth maps for each frame in a 2D-video.
  • 7. The method of claim 1, further comprising reading meta-instructions for depth map enhancement for the 2D-image or video from a file stored on a local or remote storage device.
  • 8. The method of claim 1, further comprising enabling a user to enhance the depth map through one of a set of graphical user interfaces (GUI), command line instructions, and custom input devices.
  • 9. The method of claim 1, wherein rendering a 3D image comprises one of rendering an anaglyph, stereo-3D or auto-stereo 3D using the enhanced depth map.
  • 10. The method of claim 1, further comprising one of displaying generated 3D image or video on an attached 3D display in real time, and storing the 3D image on local or remote storage device(s) for offline viewing.
  • 11. The method of claim 1, further comprising storing the generated enhanced depth map as grey scale images on a storage device.
  • 12. The method of claim 1, further comprising storing user modifications of the depth map as a sequence of instructions associated with each image in a control file using a pre-defined syntax.
  • 13. The method of claim 1, wherein the method is executed by a dedicated hardware device.
  • 14. The method of claim 1, wherein the method is executed by hardware contained in a stand-alone converter box.
  • 15. The method of claim 1, wherein the method is implemented as one of a circuit board, a daughter card or any other plug-in card or module.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/522,278, filed Oct. 23, 2014, which application claims the benefit of U.S. Provisional App. No. 61/897,787, filed Oct. 30, 2013.

US Referenced Citations (322)
Number Name Date Kind
4180313 Innuiya Dec 1979 A
5465175 Woodgate et al. Nov 1995 A
5650876 Davies et al. Jul 1997 A
5654810 Okamura et al. Aug 1997 A
5663831 Mashitani et al. Sep 1997 A
5731853 Taketomi et al. Mar 1998 A
5731899 Meyers Mar 1998 A
5751383 Yamanaka May 1998 A
5757545 Wu et al. May 1998 A
5771121 Hentschke Jun 1998 A
5781229 Zediker et al. Jul 1998 A
5808797 Bloom et al. Sep 1998 A
5822125 Meyers Oct 1998 A
5825552 Kurtz et al. Oct 1998 A
5831765 Nakayama et al. Nov 1998 A
5841579 Bloom et al. Nov 1998 A
5852512 Chikazawa Dec 1998 A
5855425 Hamagishi Jan 1999 A
5864375 Taketomi et al. Jan 1999 A
5894364 Nagatani Apr 1999 A
5896225 Chikazawa Apr 1999 A
5914805 Crowley Jun 1999 A
5943166 Hoshi et al. Aug 1999 A
5969850 Harrold et al. Oct 1999 A
5969872 Ben Oren et al. Oct 1999 A
5982553 Bloom et al. Nov 1999 A
5986804 Mashitani et al. Nov 1999 A
5991074 Nose et al. Nov 1999 A
5993003 McLaughlin Nov 1999 A
5993004 Moseley et al. Nov 1999 A
6014164 Woodgate et al. Jan 2000 A
6014187 Taketomi et al. Jan 2000 A
6020931 Bilbrey et al. Feb 2000 A
6040807 Hamagishi et al. Mar 2000 A
6048081 Richardson Apr 2000 A
6049352 Allio Apr 2000 A
6061083 Aritake et al. May 2000 A
6064424 van Berkel et al. May 2000 A
6088102 Manhart Jul 2000 A
6097554 Watkins Aug 2000 A
6101036 Bloom Aug 2000 A
6130770 Bloom Oct 2000 A
6151062 Inoguchi et al. Nov 2000 A
6157402 Torgeson Dec 2000 A
6188518 Martin Feb 2001 B1
6215579 Bloom et al. Apr 2001 B1
6215590 Okano Apr 2001 B1
6219184 Nagatani Apr 2001 B1
6224214 Martin et al. May 2001 B1
6254246 Tiao et al. Jul 2001 B1
6259450 Chiabrera et al. Jul 2001 B1
6266106 Murata et al. Jul 2001 B1
6266176 Anderson et al. Jul 2001 B1
6271808 Corbin Aug 2001 B1
6304263 Chiabrera et al. Oct 2001 B1
6337721 Hamagishi et al. Jan 2002 B1
6381072 Burger Apr 2002 B1
6385882 Conley et al. May 2002 B1
6388815 Collins, Jr. et al. May 2002 B1
6445406 Taniguchi et al. Sep 2002 B1
6462871 Morishima Oct 2002 B1
6481849 Martin et al. Nov 2002 B2
6525889 Collins, Jr. et al. Feb 2003 B1
6533420 Eichenlaub Mar 2003 B1
6547628 Long Apr 2003 B1
6574047 Hawver Jun 2003 B2
6674939 Anderson et al. Jan 2004 B1
6697042 Cohen et al. Feb 2004 B1
6700701 Son et al. Mar 2004 B1
6707591 Amm Mar 2004 B2
6712480 Leung et al. Mar 2004 B1
6714173 Shinoura Mar 2004 B2
6724951 Anderson et al. Apr 2004 B1
6727866 Wang et al. Apr 2004 B2
6728023 Alioshin et al. Apr 2004 B1
6736512 Balogh May 2004 B2
6747781 Trisnadi Jun 2004 B2
6760140 Argueta-Diaz et al. Jul 2004 B1
6764875 Shook Jul 2004 B2
6766073 Anderson Jul 2004 B1
6767751 Hunter Jul 2004 B2
6775048 Starkweather et al. Aug 2004 B1
6782205 Trisnadi et al. Aug 2004 B2
6791570 Schwerdtner et al. Sep 2004 B1
6795250 Johnson et al. Sep 2004 B2
6800238 Miller Oct 2004 B1
6801354 Payne et al. Oct 2004 B1
6806997 Dueweke et al. Oct 2004 B1
6813059 Hunter et al. Nov 2004 B2
6822797 Carlisle et al. Nov 2004 B1
6829077 Maheshwari Dec 2004 B1
6829092 Amm et al. Dec 2004 B2
6829258 Carlisle et al. Dec 2004 B1
6877882 Haven et al. Apr 2005 B1
7139042 Nam et al. Nov 2006 B2
7154653 Kean et al. Dec 2006 B2
7161614 Yamashita et al. Jan 2007 B1
7168249 Starkweather et al. Jan 2007 B2
7215474 Argueta-Diaz May 2007 B2
7236238 Durresi et al. Jun 2007 B1
7271945 Hagood et al. Sep 2007 B2
7286280 Whitehead et al. Oct 2007 B2
7295264 Kim Nov 2007 B2
7298552 Redert Nov 2007 B2
7304785 Hagood et al. Dec 2007 B2
7304786 Hagood et al. Dec 2007 B2
7311607 Tedsen et al. Dec 2007 B2
7365897 Hagood et al. Apr 2008 B2
7405852 Brosnihan et al. Jul 2008 B2
7417782 Hagood et al. Aug 2008 B2
7425069 Schwerdtner et al. Sep 2008 B2
7430347 Anderson et al. Sep 2008 B2
7432878 Nayar et al. Oct 2008 B1
7450304 Sakai et al. Nov 2008 B2
7502159 Hagood, IV et al. Mar 2009 B2
7518663 Cornelissen Apr 2009 B2
7551344 Hagood et al. Jun 2009 B2
7551353 Kim et al. Jun 2009 B2
7614748 Nayar et al. Nov 2009 B2
7616368 Hagood, IV Nov 2009 B2
7619806 Hagood, IV et al. Nov 2009 B2
7630598 Anderson et al. Dec 2009 B2
7633670 Anderson et al. Dec 2009 B2
7636189 Hagood, IV et al. Dec 2009 B2
7651282 Zomet et al. Jan 2010 B2
7660499 Anderson et al. Feb 2010 B2
7675665 Hagood et al. Mar 2010 B2
7703924 Nayar Apr 2010 B2
7742016 Hagood et al. Jun 2010 B2
7746529 Hagood et al. Jun 2010 B2
7750982 Nelson et al. Jul 2010 B2
7755582 Hagood et al. Jul 2010 B2
7817045 Onderko Oct 2010 B2
7839356 Hagood et al. Nov 2010 B2
7852546 Fijol et al. Dec 2010 B2
7857700 Wilder et al. Dec 2010 B2
7864419 Cossairt et al. Jan 2011 B2
7876489 Gandhi et al. Jan 2011 B2
7889425 Connor Feb 2011 B1
7891815 Nayar et al. Feb 2011 B2
7911671 Rabb Mar 2011 B2
7927654 Hagood et al. Apr 2011 B2
7978407 Connor Jul 2011 B1
8134779 Roh et al. Mar 2012 B2
8149348 Yun et al. Apr 2012 B2
8159428 Hagood et al. Apr 2012 B2
8174632 Kim et al. May 2012 B2
8179424 Moller May 2012 B2
8189039 Hiddink et al. May 2012 B2
8242974 Yamazaki et al. Aug 2012 B2
8248560 Kim et al. Aug 2012 B2
8262274 Kim et al. Sep 2012 B2
8310442 Hagood et al. Nov 2012 B2
8363100 Lu Jan 2013 B2
8402502 Meuninck et al. Mar 2013 B2
8441602 Kim et al. May 2013 B2
8446559 Kim et al. May 2013 B2
8482496 Lewis Jul 2013 B2
8519923 Hagood, IV et al. Aug 2013 B2
8519945 Hagood et al. Aug 2013 B2
8520285 Fike, III et al. Aug 2013 B2
8526096 Steyn et al. Sep 2013 B2
8545048 Kang et al. Oct 2013 B2
8545084 Kim et al. Oct 2013 B2
8558961 Yun et al. Oct 2013 B2
8587498 Connor Nov 2013 B2
8587635 Hines et al. Nov 2013 B2
8593574 Ansari et al. Nov 2013 B2
8599463 Wu et al. Dec 2013 B2
8640182 Bedingfield, Sr. Jan 2014 B2
8651684 Mehrle Feb 2014 B2
8651726 Robinson Feb 2014 B2
8659830 Brott et al. Feb 2014 B2
8675125 Cossairt et al. Mar 2014 B2
8711062 Yamazaki et al. Apr 2014 B2
8736675 Holzbach et al. May 2014 B1
8786685 Sethna et al. Jul 2014 B1
8817082 Van Der Horst et al. Aug 2014 B2
8860790 Ericson et al. Oct 2014 B2
8891152 Fike, III et al. Nov 2014 B2
8897542 Wei Nov 2014 B2
8917441 Woodgate et al. Dec 2014 B2
8918831 Meuninck et al. Dec 2014 B2
8937767 Chang et al. Jan 2015 B2
8947385 Ma et al. Feb 2015 B2
8947497 Hines et al. Feb 2015 B2
8947511 Friedman Feb 2015 B2
8964009 Yoshida Feb 2015 B2
8988343 Fei et al. Mar 2015 B2
8994716 Malik Mar 2015 B2
9001423 Woodgate et al. Apr 2015 B2
9024927 Koyama May 2015 B2
9030522 Hines et al. May 2015 B2
9030536 King et al. May 2015 B2
9032470 Meuninck et al. May 2015 B2
9049426 Costa et al. Jun 2015 B2
9082353 Lewis et al. Jul 2015 B2
9086778 Friedman Jul 2015 B2
9087486 Gandhi et al. Jul 2015 B2
9116344 Wu et al. Aug 2015 B2
9128277 Steyn et al. Sep 2015 B2
9134552 Ni Chleirigh et al. Sep 2015 B2
9135868 Hagood, IV et al. Sep 2015 B2
9158106 Hagood et al. Oct 2015 B2
9160968 Hines et al. Oct 2015 B2
9167205 Hines et al. Oct 2015 B2
9176318 Hagood et al. Nov 2015 B2
9177523 Hagood et al. Nov 2015 B2
9182587 Brosnihan et al. Nov 2015 B2
9182604 Cossairt et al. Nov 2015 B2
9188731 Woodgate et al. Nov 2015 B2
9229222 Hagood et al. Jan 2016 B2
9232274 Meuninck et al. Jan 2016 B2
9235057 Robinson et al. Jan 2016 B2
9237337 Ramsey et al. Jan 2016 B2
9243774 Kim et al. Jan 2016 B2
9247228 Malik Jan 2016 B2
9250448 Robinson Feb 2016 B2
9261641 Sykora et al. Feb 2016 B2
9261694 Payne et al. Feb 2016 B2
20030067421 Sullivan Apr 2003 A1
20030197933 Sudo et al. Oct 2003 A1
20040165264 Uehara et al. Aug 2004 A1
20040174604 Brown Sep 2004 A1
20040192430 Burak et al. Sep 2004 A1
20050059487 Wilder et al. Mar 2005 A1
20050083400 Hirayama et al. Apr 2005 A1
20050111100 Mather et al. May 2005 A1
20050190443 Nam et al. Sep 2005 A1
20060023065 Alden Feb 2006 A1
20060039181 Yang et al. Feb 2006 A1
20060044987 Anderson et al. Mar 2006 A1
20060078180 Berretty Apr 2006 A1
20060244918 Cossairt et al. Nov 2006 A1
20070146358 Ijzerman Jun 2007 A1
20070165305 Mehrle Jul 2007 A1
20070222954 Hattori Sep 2007 A1
20070229778 Cha et al. Oct 2007 A1
20070255139 Deschinger et al. Nov 2007 A1
20070268590 Schwerdtner Nov 2007 A1
20080079662 Saishu et al. Apr 2008 A1
20080094853 Kim et al. Apr 2008 A1
20080123182 Cernasov May 2008 A1
20080204550 De Zwart et al. Aug 2008 A1
20080211734 Huitema et al. Sep 2008 A1
20080225114 De Zwart et al. Sep 2008 A1
20080247042 Scwerdtner Oct 2008 A1
20080259233 Krijn et al. Oct 2008 A1
20080281767 Garner Nov 2008 A1
20080291267 Leveco et al. Nov 2008 A1
20080316303 Chiu et al. Dec 2008 A1
20080316604 Redert et al. Dec 2008 A1
20090002335 Chaudhri Jan 2009 A1
20090051759 Adkins et al. Feb 2009 A1
20090217209 Chen et al. Aug 2009 A1
20090309887 Moller et al. Dec 2009 A1
20090309958 Hamagishi et al. Dec 2009 A1
20100007582 Zalewski Jan 2010 A1
20100026795 Moller et al. Feb 2010 A1
20100026797 Meuwissen et al. Feb 2010 A1
20100033813 Rogoff Feb 2010 A1
20100097687 Lipovetskaya et al. Apr 2010 A1
20100110316 Huang et al. May 2010 A1
20100165081 Jung et al. Jul 2010 A1
20100245548 Sasaki et al. Sep 2010 A1
20100309290 Myers Dec 2010 A1
20110000971 Onderko Jan 2011 A1
20110013258 Lee et al. Jan 2011 A1
20110032483 Hruska et al. Feb 2011 A1
20110074773 Jung Mar 2011 A1
20110085094 Kao et al. Apr 2011 A1
20110109629 Ericson et al. May 2011 A1
20110149030 Kang et al. Jun 2011 A1
20110188773 Wei Aug 2011 A1
20110210964 Chiu et al. Sep 2011 A1
20110234605 Smith et al. Sep 2011 A1
20110246877 Kwak et al. Oct 2011 A1
20110249026 Singh Oct 2011 A1
20110254929 Yang et al. Oct 2011 A1
20110291945 Ewing, Jr. et al. Dec 2011 A1
20110316679 Pihlaja Dec 2011 A1
20120013606 Tsai et al. Jan 2012 A1
20120019883 Chae et al. Jan 2012 A1
20120026586 Chen Feb 2012 A1
20120050262 Kim et al. Mar 2012 A1
20120057006 Joseph et al. Mar 2012 A1
20120057229 Kikuchi et al. Mar 2012 A1
20120062549 Woo et al. Mar 2012 A1
20120069019 Richards Mar 2012 A1
20120069146 Lee Mar 2012 A1
20120081359 Lee et al. Apr 2012 A1
20120102436 Nurmi Apr 2012 A1
20120113018 Yan May 2012 A1
20120120063 Ozaki May 2012 A1
20120154559 Voss et al. Jun 2012 A1
20120202187 Brinkerhoff, III Aug 2012 A1
20120206484 Hauschild et al. Aug 2012 A1
20120223879 Winter Sep 2012 A1
20120229450 Kim et al. Sep 2012 A1
20120229519 Stallings et al. Sep 2012 A1
20120229718 Huang et al. Sep 2012 A1
20120249836 Ali Oct 2012 A1
20120256096 Heimlicher et al. Oct 2012 A1
20120262398 Kim et al. Oct 2012 A1
20120274626 Hsieh Nov 2012 A1
20120274634 Yamada et al. Nov 2012 A1
20130027390 Kim et al. Jan 2013 A1
20130076746 Chung Mar 2013 A1
20130102249 Tanaka Apr 2013 A1
20130120543 Chen May 2013 A1
20130202221 Tsai Aug 2013 A1
20140035902 An et al. Feb 2014 A1
20140036173 Chang Feb 2014 A1
20140132726 Jung May 2014 A1
20140192172 Kang et al. Jul 2014 A1
20140304310 Gerbasi Oct 2014 A1
20140355302 Wilcox et al. Dec 2014 A1
20150070481 S. et al. Mar 2015 A1
20150185957 Weng et al. Jul 2015 A1
20150226972 Wang Aug 2015 A1
20150232065 Ricci et al. Aug 2015 A1
20150260999 Wang et al. Sep 2015 A1
Foreign Referenced Citations (3)
Number Date Country
2012068532 May 2012 WO
2013109252 Jul 2013 WO
2015026017 Feb 2015 WO
Non-Patent Literature Citations (2)
Entry
International Search Report and Written Opinion, PCT International Patent App. No. PCT/US2016/061313, Craig Peterson, dated Jan. 19, 2017, 22 pgs.
International Bureau of WIPO; International Preliminary Report on Patentability, dated Aug. 30, 2018, for PCT App. No. PCT/US2017/016240 filed Feb. 2, 2017; 8 pages.
Related Publications (1)
Number Date Country
20180139432 A1 May 2018 US
Provisional Applications (1)
Number Date Country
61897787 Oct 2013 US
Continuations (1)
Number Date Country
Parent 14522278 Oct 2014 US
Child 15855756 US