Method and apparatus for converting 2D-images and videos to 3D for consumer, commercial and professional applications

Information

  • Patent Grant
  • Patent Number
    9,967,546
  • Date Filed
    Thursday, October 23, 2014
  • Date Issued
    Tuesday, May 8, 2018
  • Inventors
    • Barkatullah; Javed Sabir (Portland, OR, US)
  • Examiners
    • Navas, Jr.; Edemio
  • Agents
    • Chernoff Vilhauer, LLC
Abstract
A method for converting 2D images and videos to 3D includes applying a set of pre-defined heuristic rules to assign a depth value for each pixel of a two-dimensional (2D) image source based on pixel attributes to generate an initial default depth map, refining the pre-defined heuristic rules to produce customized heuristic rules, applying the customized heuristic rules to the initial default depth map to produce a refined depth map, and rendering a three-dimensional (3D) image in a predefined format using the refined depth map.
Description
FIELD OF THE INVENTION

Embodiments here relate generally to the field of 2D to 3D video and image conversion performed either in real time or offline, with applications in consumer image/video editing software; consumer 3D display devices such as TVs, game consoles, and mobile devices; consumer satellite and cable boxes; electronic billboards and displays for commercial advertisement; and post-production professional video editing software or solutions for converting existing 2D movies and videos to 3D. More particularly, embodiments relate to a method and apparatus for extracting depth information automatically and/or semi-automatically from various visual cues in a monocular image and using the said depth information to render the image in 3D for different 3D display technologies and formats.


BACKGROUND

The rising sales of 3D-enabled TVs and personal devices in the consumer segment, the release of new and old movies in 3D, and the increasing use of large-screen electronic billboards that can display attention-grabbing 3D images for advertising or informational purposes have increased the need for creating 3D content. The ability to convert existing 2D content to 3D content automatically, or with limited manual intervention, can result in large cost and time savings and will grow the 3D-content creation market even further.


Traditionally, converting 2D videos to 3D for professional applications consists of a very labor-intensive process of rotoscoping, in which objects in each frame are manually and painstakingly traced by an artist and depth information for each object is painted by hand. This traditional 2D to 3D conversion suffers from disadvantages. Depending on the complexity of the scene in each frame, it may take several hours to several days to generate the depth map of a single frame. A 2-hour movie at 24 frames per second may contain up to one hundred thousand unique frames, and this manual depth map creation can cost upwards of $200 per frame. Consequently, this method is very expensive and slow.


On the low end of 2D to 3D conversion, consumer 3D-TV sets have built-in hardware that can automatically convert 2D video or images into 3D in real time. However, the 3D quality is extremely poor, with hardly any depth effect in the converted 3D image. Such a fully automated method is clearly not acceptable to professional movie post-production houses.


There have been numerous research publications on methods of automatically generating a depth map from a monocular 2D image for the purpose of converting the 2D image to a 3D image. The methods range from very simple heuristics to very complicated and compute-intensive image analysis. Simple heuristics may be suitable for real-time conversion applications but provide poor 3D quality. On the other hand, complex mathematical analysis may provide good 3D-image quality but may not be suitable for real-time application and hardware implementation.


A solution to this quality-versus-difficulty dilemma is to start with an automated, default lower-quality 3D image and provide the ability to add manual editing capabilities to enhance the 3D image quality.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an exemplary block diagram of the system, according to one embodiment of the invention.



FIG. 2 shows an exemplary transformation of an image frame as it is processed in the system pipeline.



FIG. 3 and FIG. 4 illustrate two exemplary graphical user interfaces (GUIs) for a user to add or modify rules for depth map estimation, according to one software embodiment of the invention.



FIG. 5 illustrates a graphical user interface (GUI) for a user to control depth map filters, according to one embodiment of the invention.



FIG. 6 illustrates an exemplary method for generating a depth map from the left and right eye views of a stereo 3D image by finding the disparity between the left and right views for each object, according to one embodiment of the invention.



FIG. 7 illustrates a flow chart for computing depth map from a 2D image source, according to one embodiment of the invention.



FIG. 8 illustrates a flow chart for additional processing and filtering of the depth map to enhance and/or exaggerate 3D effects, according to one embodiment of the invention.



FIG. 9 illustrates a flow chart for computing a depth map from a 3D-stereo image source which contains a left eye view and a right eye view of the scene, according to one embodiment of the invention.



FIG. 10 illustrates a system diagram, according to one embodiment of the invention.





DETAILED DESCRIPTION

Embodiments of the present invention relate to a method, apparatus, system, and computer program for automatically generating a depth map from a monocular (2D) image source using a set of pre-defined heuristic rules. Optionally, in a semi-manual mode, a user can augment or replace the pre-defined heuristic rules with user-defined rules to generate a superior-quality depth map. The said depth map, in conjunction with the original 2D image source, can be used to generate a 3D image in any desired format. The embodiments of the invention can take advantage of the computing power of a general-purpose CPU, a GPU, or a dedicated FPGA or ASIC chip to process the sequence of images from video frames of a streaming 2D video to generate 3D video frames. Depending on the available processing capabilities of the processing unit and the complexity and size of the pre-defined rules, the conversion of 2D video frames to 3D can be done in real time in automatic mode.


In one embodiment, the 2D to 3D conversion algorithm is implemented as a software application running on a computing device, such as a personal computer, tablet computer or smart-phone. A user receives a streaming 2D-video from the Internet or from a file stored on a local storage device and uses the application to automatically convert the 2D video to 3D and display it on the attached 3D display in real time. In one embodiment, the converted 3D-video can be stored back on the local or network storage device. In one embodiment, the user can modify or augment the pre-defined heuristic rules for depth map estimation and depth map filters to produce user-desired quality and format of 3D-image. In one embodiment, the user can save the custom heuristic rules for each 2D-image or a sequence of 2D-images in a control file using some pre-defined syntax such as XML and can play the said control file together with the 2D-image or 2D-image sequence to reproduce the 3D-image or image sequences or the depth map for the image or image sequences.


In one embodiment, the 2D to 3D conversion process is implemented as a software application running on a computing device such as a personal computer, tablet computer or smart-phone. A user loads a video from a file stored on a local or network attached storage device and uses the application to automatically or in an interactive mode convert the 2D video to 3D and store it back offline on the local or network attached disk. In one embodiment, the user can adjust or augment the pre-defined heuristic rules for depth map estimation and depth map filters to produce user-desired quality and format of 3D-image. In one embodiment, the user can adjust existing rules and add new rules through a graphical user interface (GUI) of the application. In one embodiment, the user modified or added rules can be stored in a control file using some pre-defined syntax such as XML and can be read in by the 2D to 3D conversion application and applied in the conversion.
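The patent states only that the control file uses "some pre-defined syntax such as XML." As a rough illustration, such a rules control file and a reader for it might look like the following sketch; every tag and attribute name here (`depth_rules`, `rule`, `region`, `color`, `intensity`) is invented for the example and is not the patent's actual schema.

```python
# Sketch of a hypothetical XML control file of depth rules and a reader for
# it. All tag and attribute names are assumptions for illustration only.
import xml.etree.ElementTree as ET

CONTROL_XML = """
<depth_rules>
  <rule name="sky" depth="255">
    <region y_min="0.0" y_max="0.33"/>
    <color channel="blue" min="150" max="255"/>
    <intensity min="0.6"/>
  </rule>
</depth_rules>
"""

def load_rules(xml_text):
    """Parse the control file into a list of rule dictionaries."""
    rules = []
    for rule in ET.fromstring(xml_text).findall("rule"):
        entry = {"name": rule.get("name"), "depth": int(rule.get("depth"))}
        region = rule.find("region")
        if region is not None:
            entry["region"] = (float(region.get("y_min")),
                               float(region.get("y_max")))
        color = rule.find("color")
        if color is not None:
            entry["color"] = (color.get("channel"),
                              int(color.get("min")), int(color.get("max")))
        rules.append(entry)
    return rules

rules = load_rules(CONTROL_XML)
print(rules)
```

A format along these lines would let the same control file be replayed against a 2D image sequence, as the text describes.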


In one embodiment, the 2D to 3D conversion algorithm is implemented in dedicated hardware such as an FPGA (field programmable gate array) or custom ASIC (application specific integrated circuit) chip. In one embodiment, the entire 2D to 3D video conversion system is implemented as a stand-alone converter box. In one embodiment, the entire 2D to 3D video conversion system is implemented as a circuit board or a daughter card. In one embodiment, a stand-alone implementation of the conversion system can be attached to the output of a streaming video receiver, broadcast TV receiver, satellite-TV receiver or cable-TV receiver, and the output of the standalone converter box can be connected to 3D displays.


In one embodiment, the 2D to 3D conversion algorithm is implemented as a software application utilizing the graphics processing unit (GPU) of a computing device such as a personal computer, tablet computer or smart-phone to enhance performance.



FIG. 1 shows an exemplary block diagram of the 2D to 3D conversion process, according to one embodiment of the invention. In one embodiment, the process comprises receiving a single image frame or a sequence of image frames. Each pixel of the image frame, singularly or as a group, is analyzed. Based upon either default depth rules or user-specified depth rules, the process assigns a depth value to the pixels. In one embodiment, the depth value of the entire frame is stored as a grey-scale depth map image. In one embodiment, the raw depth map image is further processed and filtered according to default rules and/or user-defined rules. In one embodiment, the processed depth map image is applied to the original 2D image by the render engine to calculate pixel displacements. Default and/or user adjustments are applied to fine-tune the 3D rendering of the original 2D image for the 3D display device.


Referring back to FIG. 1, in one embodiment the system comprises a 2D-video source 101 that can stream video from either a local or a remote source. The depth estimator 102 estimates the depth of each pixel in the image frame using a default set of rules stored in a rules database 104. An example of a default rule would be "if the position of the pixel is in the upper third of the image frame, and the color is within a certain range of blue, and the intensity is greater than 60%, then assign this pixel the depth value for sky." In one embodiment, the user can input additional rules interactively at 103 or through a file as illustrated by 104. In one embodiment, the output raw depth map 112 from 102 can be further refined, filtered and processed by the depth enhancer 106 using default rule sets from 105 or user-defined rule sets from 107. In one embodiment, the output refined depth map 113 from 106 is used with the original 2D-image 111 by the render engine 108 to produce a 3D-image 119. The rendering may be controlled by the user at 110.
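The quoted sky rule can be expressed directly in code. The sketch below is a minimal illustration under stated assumptions: the particular blue-range test, the interpretation of the 60% intensity threshold, and the grey-scale value of 255 reserved for sky are all choices made for this example, not values the patent specifies.

```python
# Minimal sketch of the example default rule quoted above: a pixel in the
# upper third of the frame whose color is "within a certain range of blue"
# and whose intensity exceeds 60% is assigned the depth value for sky.
SKY_DEPTH = 255  # farthest plane in an 8-bit grey-scale depth map (assumed)

def sky_rule(x, y, r, g, b, frame_height):
    """Return SKY_DEPTH if the pixel matches the sky heuristic, else None."""
    in_upper_third = y < frame_height / 3
    is_blue = b > 150 and b > r and b > g      # assumed "range of blue" test
    intensity = (r + g + b) / (3 * 255)        # normalized brightness in [0, 1]
    if in_upper_third and is_blue and intensity > 0.6:
        return SKY_DEPTH
    return None

print(sky_rule(x=100, y=50, r=120, g=180, b=240, frame_height=720))   # sky pixel
print(sky_rule(x=100, y=600, r=120, g=180, b=240, frame_height=720))  # too low in frame
```

Rules written against pixel position, color and intensity in this way can be evaluated independently per pixel, which is what makes the real-time hardware implementations described earlier plausible.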



FIG. 2 illustrates one embodiment of the images 111, 112, 113 and 119 as they go through transformation from one processing block to the next. The original image 111 comes from the 2D video source 101. The image 112 results from the depth estimator 102. The depth map enhancer 106 produces the image 113. The process then renders the image 119 on the display 109.



FIG. 3 illustrates one embodiment of a graphical user interface (GUI) 201 to enable the user to enter depth rules consisting of the color, intensity and location of the pixel within the image frame. Block 202 illustrates one embodiment of specifying a pixel color range as RGB values with offsets and an intensity value. Block 203 illustrates one embodiment of a bounding box region for the rule to apply. The GUI 201 also illustrates an embodiment of a preview window showing the result of applying the rules to the depth map.



FIG. 4 illustrates one embodiment of a graphical user interface (GUI) 204 to enable the user to enter depth rules consisting of the hue, saturation and intensity of the pixel within the image frame. The user makes these inputs through a series of sliders or other user interface devices in 205.



FIG. 5 illustrates one embodiment of a graphical user interface (GUI) 206 to enable the user to specify depth map filtering and processing. The GUI 206 also illustrates an embodiment of a preview window showing the result of applying the rules to the depth map.



FIG. 6 illustrates one embodiment of a graphical user interface (GUI) 207 to enable the user to identify and associate similar objects in the left and right eye views manually using a mouse selection operation. The user input region 208 also illustrates an embodiment showing the disparity between the same object in the left and right eye views, and the process uses this disparity to calculate a depth value for pixels within the object.



FIG. 7 shows a flowchart of one embodiment of a method to calculate the depth of each pixel within the image frame. The process starts with the received 2D video frame at 301. At 302 the process takes a pixel from the image, initializes a counter i, and compares the pixel attributes against some or all of the depth map rules in 304. If the rule-specified attributes are found in the pixel, the pixel depth is calculated using the matching rule, as shown in block 305. If no rule matches the pixel, the counter is incremented and checked against a threshold count N; once the counter reaches N, a default depth value is assigned as shown in 308. This process continues until all of the pixels in the frame are processed at 309, producing the enhanced depth map at 310.
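The per-pixel loop of FIG. 7 might be sketched as follows, with rules modeled as callables that return a depth value on a match and None otherwise. The rule representation and the default depth of 128 are assumptions made for illustration, not the patent's specification.

```python
# Hedged sketch of the FIG. 7 flow: each pixel is tested against the rule
# list in order; the first matching rule supplies its depth, otherwise a
# default is assigned once all N rules have been tried.
DEFAULT_DEPTH = 128  # mid-range fallback depth (assumption)

def estimate_depth_map(pixels, rules):
    """pixels: list of (x, y, r, g, b) tuples; rules: list of callables
    returning a depth value or None. Returns a flat list of depths."""
    depth_map = []
    for px in pixels:
        depth = DEFAULT_DEPTH
        i = 0                      # counter i initialized (block 302)
        while i < len(rules):      # threshold count N = len(rules)
            d = rules[i](px)
            if d is not None:      # rule attributes found in the pixel (305)
                depth = d
                break
            i += 1                 # no match: increment and try the next rule
        depth_map.append(depth)    # default kept when the counter reaches N (308)
    return depth_map

# Tiny usage example with one illustrative rule: bright pixels come forward.
bright_rule = lambda px: 64 if (px[2] + px[3] + px[4]) / 765 > 0.8 else None
print(estimate_depth_map([(0, 0, 250, 250, 250), (1, 0, 10, 10, 10)],
                         [bright_rule]))  # [64, 128]
```

First-match-wins ordering, as here, is one simple way to resolve conflicts when several rules could apply to the same pixel.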



FIG. 8 shows a flow chart of one embodiment of a method to enhance a depth map image. Various default and/or user-specified filter operations can be applied to post-process the raw depth map generated. The depth map is received, such as from 309 in the previous process, although the depth map may be produced by other means. Again, a counter is initialized at 402. If the counter is below a previously decided count at 403, the process moves to applying the filter for that iteration to the depth map at 405. The counter is then incremented at 406 and the process returns to 403. If the counter reaches its final count at 403, the generated depth map can be optionally saved as a grey-scale image, as shown in block 407. The 3D image is then rendered at 408.
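The FIG. 8 loop amounts to applying a fixed sequence of filters to the raw depth map. In the sketch below, the two example filters (a 3-tap box blur and a linear contrast stretch, each operating on one row of the depth map) are illustrative assumptions; the patent does not prescribe a particular filter set.

```python
# Sketch of the FIG. 8 filter loop over one row of a depth map. The specific
# filters are assumptions chosen for illustration.
def smooth(depth_row):
    """3-tap box blur: average each value with its immediate neighbors."""
    n = len(depth_row)
    out = []
    for j in range(n):
        window = depth_row[max(0, j - 1):min(n, j + 2)]
        out.append(sum(window) // len(window))
    return out

def stretch(depth_row):
    """Linearly stretch depth values to the full 0-255 range."""
    lo, hi = min(depth_row), max(depth_row)
    if hi == lo:
        return depth_row[:]
    return [(d - lo) * 255 // (hi - lo) for d in depth_row]

def enhance(depth_row, filters):
    i = 0                                  # counter initialized (402)
    while i < len(filters):                # final count check (403)
        depth_row = filters[i](depth_row)  # apply filter i (405)
        i += 1                             # increment counter (406)
    return depth_row                       # ready to save / render (407, 408)

print(enhance([100, 120, 110, 200], [smooth, stretch]))
```

Because each filter takes a depth map and returns a depth map, default and user-specified filters can be freely mixed in the list, matching the rule sets 105 and 107 of FIG. 1.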



FIG. 9 illustrates one embodiment of a block diagram for estimating a depth map from a stereo 3D image, which may result from a process other than that discussed with regard to FIG. 3. The stereo 3D image consists of a left eye view and a right eye view of the scene and is received at 501. Initially, the depth map is assumed to have some default depth at 502. Similar objects, referred to here as 'blobs', from the left and right eye views are identified either automatically, using attributes such as color, intensity, size and location, or manually by user-defined instructions. These blobs are added to the blob list. The user-defined blob matches, if they exist at 506, result in an update to the blob list at 508. The process then generates a depth map value for that pixel at 509, which eventually results in the entire depth map used at 401 in FIG. 8. The disparity between the left and right eye views for the same object is a direct measure of the depth of the object, and this disparity data is used to estimate the depth of each pixel within the object.
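The disparity-to-depth idea of FIG. 9 can be sketched as follows: each matched blob pair supplies a horizontal displacement, which is mapped to a depth value for every pixel inside the blob, while unmatched pixels keep the default depth. The blob representation and the simple linear disparity-to-depth mapping are assumptions made for this example.

```python
# Sketch of depth estimation from matched stereo blobs. A larger horizontal
# disparity means a nearer object, mapped here to a larger depth value.
DEFAULT_DEPTH = 128  # initial default depth assumed at block 502

def depth_from_blobs(width, height, blob_pairs, max_disparity=64):
    """blob_pairs: list of ((x_left, y, w, h), (x_right, y, w, h)) matches
    between the left and right eye views. Returns a height x width depth map
    initialized to the default depth."""
    depth_map = [[DEFAULT_DEPTH] * width for _ in range(height)]
    for (lx, ly, bw, bh), (rx, _ry, _w, _h) in blob_pairs:
        disparity = abs(lx - rx)                            # horizontal displacement
        depth = min(255, disparity * 255 // max_disparity)  # linear map (assumed)
        for y in range(ly, min(height, ly + bh)):           # paint blob pixels (509)
            for x in range(lx, min(width, lx + bw)):
                depth_map[y][x] = depth
    return depth_map

dm = depth_from_blobs(8, 4, [((1, 1, 2, 2), (3, 1, 2, 2))])
print(dm[1][1], dm[0][0])  # blob pixel vs. background pixel
```

A production system would refine this with per-pixel matching inside each blob, but the blob-level version captures the flow of blocks 502 through 509.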



FIG. 10 illustrates a system diagram, according to one embodiment of the invention. The instructions such as 614 for the method flow charts described above are stored in a memory 612 as machine-readable instructions that, when executed, cause a processor such as 608 in a specific system to execute the instructions. In one embodiment, the system is a mobile device. In another embodiment, the system is a stand-alone computer. In another embodiment, the system is an embedded processor in a larger system. Elements of embodiments are provided as a machine-readable storage medium for storing the computer-executable instructions. The machine-readable storage medium may include, but is not limited to, flash memory, optical disks, CD-ROMs, DVD-ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, or other types of machine-readable storage media suitable for storing electronic or computer-executable instructions, including disk storage 610. For example, embodiments of the invention may be downloaded as a computer program which may be transferred from a remote computer to a requesting computer by way of data signals via a communication link 602 coupled to a network interface 604 for the requesting computer. The processor 608 executes the instructions to render the 3D image on the display 616.


Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic “may,” “might,” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.


While the invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description.

Claims
  • 1. A method for converting 2D images and videos to 3D comprising: (a) applying a set of pre-defined heuristic rules to assign a depth value for a set of pixels of a two-dimensional (2D) image source based on pixel attributes to generate an initial default depth map where said initial default depth map is generated based upon; (i) selecting a first one of said pixels of said two-dimensional (2D) image and setting a counter;(ii) comparing said first one of said pixels of said two-dimensional (2D) image against said set of pre-defined heuristic rules based upon said pixel attributes;(iii) if said comparing said first one of said pixels based upon said pixel attributes is found in said first one of said pixels then said depth value for said first pixel is assigned;(iv) if said comparing said first one of said pixels based upon said pixel attributes is not found in said first one of said pixels then incrementing said counter;(v) if said counter is less than a threshold then selecting another pixel of said pixels of said two-dimensional (2D) image;(vi) if said counter is greater than a threshold then assigning a default said depth value for said first one of said pixels;(vii) repeating said selecting, said comparing of steps (i) through (vi) for said set of said pixels of said two-dimensional (2D) image;(viii) wherein said set of pre-defined heuristic rules include (a) a first rule based upon color of said pixels of said two-dimensional (2D) image, (b) a second rule based upon an individual intensity of individual said pixels of said two-dimensional (2D) image in a manner independently of said individual intensity of other said pixels of said two-dimensional (2D) image, (c) a third rule based upon a location of said pixels of said two-dimensional (2D) image where different locations of said pixels within said two-dimensional (2D) image have different heuristic rules based upon their respective locations, (d) and a rectangular bounding box of said pixels of said 
two-dimensional (2D) image;(b) refining the pre-defined heuristic rules to produce customized heuristic rules;(c) applying the customized heuristic rules to the initial default depth map to produce a refined depth map; and(d) rendering a three-dimensional (3D) image in a predefined format using the refined depth map.
  • 2. The method of claim 1, wherein the pixel attributes comprise at least one of position, color, intensity, and adjacent pixel attributes.
  • 3. The method of claim 1, wherein refining the pre-defined heuristic rules comprises receiving a set of user defined rules to one of augment or replace the pre-defined heuristic rules.
  • 4. The method of claim 1, wherein refining the initial depth map comprises manual selection of regions in the original 2D image based on at least one of pixel position, color, intensity, initial depth value range, assigning depth values for pixels in the regions, and modifying depth values for pixels in the regions.
  • 5. The method of claim 1, further comprising scanning a stereo, three dimensional image having two views for same objects within the two views and calculating depth value based on horizontal displacements between the same objects in the two views.
  • 6. The method of claim 1, wherein refining the initial depth map comprises performing image processing and filtering.
  • 7. The method of claim 1, further comprising saving the refined depth map as a grey scale image.
  • 8. The method of claim 7, wherein rendering the 3D image comprises using the grey scale image.
  • 9. The method of claim 1, further comprising saving the customized heuristics as a control file.
  • 10. The method of claim 9, wherein rendering the 3D image comprises using the control file to render the image.
  • 11. The method of claim 1, wherein the method comprises instructions stored in a memory to be executed by a processor.
  • 12. The method of claim 1, wherein the method is executed by a dedicated hardware component comprising one of an FPGA, an ASIC chip, and a dedicated functional unit within a processor.
  • 13. The method of claim 1, wherein the method is executed by a stand alone converter box.
  • 14. The method of claim 1, wherein the method is performed by a component of computing device comprises one of a circuit board, a daughter card, and a plug-in card.
  • 15. The method of claim 1, wherein receiving the 2D image comprises receiving a 2D image from one of an output of a streaming video receiver, a broadcast TV receiver, a satellite-TV receiver and a cable-TV receiver.
  • 16. The method of claim 1, wherein rendering the 3D image comprises rendering the 3D image as one of a stereo 3D image, an auto-stereo 3D image and an anaglyph.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. provisional patent application Ser. No. 61/897,106, filed Oct. 29, 2013, which is herein incorporated by reference.

US Referenced Citations (320)
Number Name Date Kind
4180313 Innuiya Dec 1979 A
5465175 Woodgate et al. Nov 1995 A
5650876 Davies et al. Jul 1997 A
5654810 Okamura et al. Aug 1997 A
5663831 Mashitani et al. Sep 1997 A
5731853 Taketomi et al. Mar 1998 A
5731899 Meyers Mar 1998 A
5751383 Yamanaka May 1998 A
5757545 Wu et al. May 1998 A
5771121 Hentschke Jun 1998 A
5781229 Zediker et al. Jul 1998 A
5808797 Bloom Sep 1998 A
5822125 Meyers Oct 1998 A
5825552 Kurtz et al. Oct 1998 A
5831765 Nakayama et al. Nov 1998 A
5841579 Bloom et al. Nov 1998 A
5852512 Chikazawa Dec 1998 A
5855425 Hamagishi Jan 1999 A
5864375 Taketomi et al. Jan 1999 A
5894364 Nagatani Apr 1999 A
5896225 Chikazawa Apr 1999 A
5914805 Crowley Jun 1999 A
5943166 Hoshi et al. Aug 1999 A
5969850 Harrold et al. Oct 1999 A
5969872 Ben Oren et al. Oct 1999 A
5982553 Bloom et al. Nov 1999 A
5986804 Mashitani et al. Nov 1999 A
5991074 Nose et al. Nov 1999 A
5993003 McLaughlin Nov 1999 A
5993004 Moseley et al. Nov 1999 A
6014164 Woodgate et al. Jan 2000 A
6014187 Taketomi et al. Jan 2000 A
6020931 Bilbrey et al. Feb 2000 A
6040807 Hamagishi et al. Mar 2000 A
6048081 Richardson Apr 2000 A
6049352 Allio Apr 2000 A
6061083 Aritake et al. May 2000 A
6064424 van Berkel et al. May 2000 A
6088102 Manhart Jul 2000 A
6097554 Watkins Aug 2000 A
6101036 Bloom Aug 2000 A
6130770 Bloom Oct 2000 A
6151062 Inoguchi et al. Nov 2000 A
6157402 Torgeson Dec 2000 A
6188518 Martin Feb 2001 B1
6215579 Bloom et al. Apr 2001 B1
6215590 Okano Apr 2001 B1
6219184 Nagatani Apr 2001 B1
6224214 Martin et al. May 2001 B1
6254246 Tiao et al. Jul 2001 B1
6259450 Chiabrera et al. Jul 2001 B1
6266106 Murata et al. Jul 2001 B1
6266176 Anderson et al. Jul 2001 B1
6271808 Corbin Aug 2001 B1
6304263 Chiabrera et al. Oct 2001 B1
6337721 Hamagishi et al. Jan 2002 B1
6381072 Burger Apr 2002 B1
6385882 Conley et al. May 2002 B1
6388815 Collins, Jr. et al. May 2002 B1
6445406 Taniguchi et al. Sep 2002 B1
6462871 Morishima Oct 2002 B1
6481849 Martin et al. Nov 2002 B2
6525889 Collins, Jr. et al. Feb 2003 B1
6533420 Eichenlaub Mar 2003 B1
6547628 Long Apr 2003 B1
6574047 Hawver Jun 2003 B2
6674939 Anderson et al. Jan 2004 B1
6697042 Cohen et al. Feb 2004 B1
6700701 Son et al. Mar 2004 B1
6707591 Amm Mar 2004 B2
6712480 Leung et al. Mar 2004 B1
6714173 Shinoura Mar 2004 B2
6724951 Anderson et al. Apr 2004 B1
6727866 Wang et al. Apr 2004 B2
6728023 Alioshin et al. Apr 2004 B1
6736512 Balogh May 2004 B2
6747781 Trisnadi Jun 2004 B2
6760140 Argueta-Diaz et al. Jul 2004 B1
6764875 Shook Jul 2004 B2
6766073 Anderson Jul 2004 B1
6767751 Hunter Jul 2004 B2
6775048 Starkweather et al. Aug 2004 B1
6782205 Trisnadi et al. Aug 2004 B2
6791570 Schwerdtner et al. Sep 2004 B1
6795250 Johnson et al. Sep 2004 B2
6800238 Miller Oct 2004 B1
6801354 Payne et al. Oct 2004 B1
6806997 Dueweke et al. Oct 2004 B1
6813059 Hunter et al. Nov 2004 B2
6822797 Carlisle et al. Nov 2004 B1
6829077 Maheshwari Dec 2004 B1
6829092 Amm et al. Dec 2004 B2
6829258 Carlisle et al. Dec 2004 B1
6877882 Haven et al. Apr 2005 B1
7047019 Cox May 2006 B1
7139042 Nam et al. Nov 2006 B2
7154653 Kean et al. Dec 2006 B2
7161614 Yamashita et al. Jan 2007 B1
7168249 Starkweather et al. Jan 2007 B2
7215474 Argueta-Diaz May 2007 B2
7236238 Durresi et al. Jun 2007 B1
7271945 Hagood et al. Sep 2007 B2
7286280 Whitehead et al. Oct 2007 B2
7295264 Kim Nov 2007 B2
7298552 Redert Nov 2007 B2
7304785 Hagood et al. Dec 2007 B2
7304786 Hagood et al. Dec 2007 B2
7311607 Tedsen et al. Dec 2007 B2
7365897 Hagood et al. Apr 2008 B2
7405852 Brosnihan et al. Jul 2008 B2
7417782 Hagood et al. Aug 2008 B2
7425069 Schwerdtner et al. Sep 2008 B2
7430347 Anderson et al. Sep 2008 B2
7432878 Nayar et al. Oct 2008 B1
7450304 Sakai et al. Nov 2008 B2
7502159 Hagood, IV et al. Mar 2009 B2
7518663 Cornelissen Apr 2009 B2
7551344 Hagood et al. Jun 2009 B2
7551353 Kim et al. Jun 2009 B2
7614748 Nayar et al. Nov 2009 B2
7616368 Hagood, IV Nov 2009 B2
7619806 Hagood, IV et al. Nov 2009 B2
7630598 Anderson et al. Dec 2009 B2
7633670 Anderson et al. Dec 2009 B2
7636189 Hagood, IV et al. Dec 2009 B2
7651282 Zomet et al. Jan 2010 B2
7660499 Anderson et al. Feb 2010 B2
7675665 Hagood et al. Mar 2010 B2
7703924 Nayar Apr 2010 B2
7742016 Hagood et al. Jun 2010 B2
7746529 Hagood et al. Jun 2010 B2
7750982 Nelson et al. Jul 2010 B2
7755582 Hagood et al. Jul 2010 B2
7817045 Onderko Oct 2010 B2
7839356 Hagood et al. Nov 2010 B2
7852546 Fijol et al. Dec 2010 B2
7857700 Wilder et al. Dec 2010 B2
7864419 Cossairt et al. Jan 2011 B2
7876489 Gandhi et al. Jan 2011 B2
7889425 Connor Feb 2011 B1
7891815 Nayar et al. Feb 2011 B2
7911671 Rabb Mar 2011 B2
7927654 Hagood et al. Apr 2011 B2
7978407 Connor Jul 2011 B1
8134779 Roh et al. Mar 2012 B2
8149348 Yun et al. Apr 2012 B2
8159428 Hagood et al. Apr 2012 B2
8174632 Kim et al. May 2012 B2
8179424 Moller May 2012 B2
8189039 Hiddink et al. May 2012 B2
8242974 Yamazaki et al. Aug 2012 B2
8248560 Kim et al. Aug 2012 B2
8262274 Kim et al. Sep 2012 B2
8310442 Hagood et al. Nov 2012 B2
8363100 Lu Jan 2013 B2
8402502 Meuninck et al. Mar 2013 B2
8441602 Kim et al. May 2013 B2
8446559 Kim et al. May 2013 B2
8482496 Lewis Jul 2013 B2
8519923 Hagood, IV et al. Aug 2013 B2
8519945 Hagood et al. Aug 2013 B2
8520285 Fike, III et al. Aug 2013 B2
8526096 Steyn et al. Sep 2013 B2
8545048 Kang et al. Oct 2013 B2
8545084 Kim et al. Oct 2013 B2
8558961 Yun et al. Oct 2013 B2
8587498 Connor Nov 2013 B2
8587635 Hines et al. Nov 2013 B2
8593574 Ansari et al. Nov 2013 B2
8599463 Wu et al. Dec 2013 B2
8640182 Bedingfield, Sr. Jan 2014 B2
8651684 Mehrle Feb 2014 B2
8651726 Robinson Feb 2014 B2
8659830 Brott et al. Feb 2014 B2
8675125 Cossairt et al. Mar 2014 B2
8711062 Yamazaki et al. Apr 2014 B2
8736675 Holzbach et al. May 2014 B1
8786685 Sethna et al. Jul 2014 B1
8817082 Van Der Horst et al. Aug 2014 B2
8860790 Ericson et al. Oct 2014 B2
8891152 Fike, III et al. Nov 2014 B2
8897542 Wei Nov 2014 B2
8917441 Woodgate et al. Dec 2014 B2
8918831 Meuninck et al. Dec 2014 B2
8937767 Chang et al. Jan 2015 B2
8947385 Ma et al. Feb 2015 B2
8947497 Hines et al. Feb 2015 B2
8947511 Friedman Feb 2015 B2
8964009 Yoshida Feb 2015 B2
8988343 Fei et al. Mar 2015 B2
8994716 Malik Mar 2015 B2
9001423 Woodgate et al. Apr 2015 B2
9024927 Koyama May 2015 B2
9030522 Hines et al. May 2015 B2
9030536 King et al. May 2015 B2
9032470 Meuninck et al. May 2015 B2
9049426 Costa et al. Jun 2015 B2
9082353 Lewis et al. Jul 2015 B2
9086778 Friedman Jul 2015 B2
9087486 Gandhi et al. Jul 2015 B2
9116344 Wu et al. Aug 2015 B2
9128277 Steyn et al. Sep 2015 B2
9134552 Ni Chleirigh et al. Sep 2015 B2
9135868 Hagood, IV et al. Sep 2015 B2
9158106 Hagood et al. Oct 2015 B2
9160968 Hines et al. Oct 2015 B2
9167205 Hines et al. Oct 2015 B2
9176318 Hagood et al. Nov 2015 B2
9177523 Hagood et al. Nov 2015 B2
9182587 Brosnihan et al. Nov 2015 B2
9182604 Cossairt et al. Nov 2015 B2
9188731 Woodgate et al. Nov 2015 B2
9229222 Hagood et al. Jan 2016 B2
9232274 Meuninck et al. Jan 2016 B2
9235057 Robinson et al. Jan 2016 B2
9237337 Ramsey et al. Jan 2016 B2
9243774 Kim et al. Jan 2016 B2
9247228 Malik Jan 2016 B2
9250448 Robinson Feb 2016 B2
9261641 Sykora et al. Feb 2016 B2
9261694 Payne et al. Feb 2016 B2
20030067421 Sullivan Apr 2003 A1
20030197933 Sudo et al. Oct 2003 A1
20040165264 Uehara et al. Aug 2004 A1
20040174604 Brown Sep 2004 A1
20040192430 Burak et al. Sep 2004 A1
20050059487 Wilder et al. Mar 2005 A1
20050083400 Hirayama et al. Apr 2005 A1
20050111100 Mather et al. May 2005 A1
20050190443 Nam et al. Sep 2005 A1
20060023065 Alden Feb 2006 A1
20060039181 Yang et al. Feb 2006 A1
20060044987 Anderson et al. Mar 2006 A1
20060078180 Berretty Apr 2006 A1
20060244918 Cossairt et al. Nov 2006 A1
20070146358 Ijzerman Jun 2007 A1
20070165305 Mehrle Jul 2007 A1
20070222954 Hattori Sep 2007 A1
20070229778 Cha et al. Oct 2007 A1
20070255139 Deschinger et al. Nov 2007 A1
20070268590 Schwerdtner Nov 2007 A1
20080079662 Saishu et al. Apr 2008 A1
20080094853 Kim et al. Apr 2008 A1
20080123182 Cernasov May 2008 A1
20080204550 De Zwart et al. Aug 2008 A1
20080211734 Huitema et al. Sep 2008 A1
20080225114 De Zwart et al. Sep 2008 A1
20080247042 Scwerdtner Oct 2008 A1
20080259233 Krijn et al. Oct 2008 A1
20080281767 Gamer Nov 2008 A1
20080291267 Leveco et al. Nov 2008 A1
20080316303 Chiu et al. Dec 2008 A1
20080316604 Redert et al. Dec 2008 A1
20090002335 Chaudhri Jan 2009 A1
20090051759 Adkins et al. Feb 2009 A1
20090217209 Chen et al. Aug 2009 A1
20090309887 Moller et al. Dec 2009 A1
20090309958 Hamagishi et al. Dec 2009 A1
20100007582 Zalewski Jan 2010 A1
20100026795 Moller et al. Feb 2010 A1
20100026797 Meuwissen et al. Feb 2010 A1
20100033813 Rogoff Feb 2010 A1
20100080448 Tam Apr 2010 A1
20100097687 Lipovetskaya et al. Apr 2010 A1
20100110316 Huang et al. May 2010 A1
20100165081 Jung et al. Jul 2010 A1
20100245548 Sasaki et al. Sep 2010 A1
20100309290 Myers Dec 2010 A1
20110000971 Onderko Jan 2011 A1
20110013258 Lee et al. Jan 2011 A1
20110032483 Hruska et al. Feb 2011 A1
20110074773 Jung Mar 2011 A1
20110085094 Kao et al. Apr 2011 A1
20110109629 Ericson et al. May 2011 A1
20110149030 Kang et al. Jun 2011 A1
20110188773 Wei Aug 2011 A1
20110210964 Chiu et al. Sep 2011 A1
20110234605 Smith et al. Sep 2011 A1
20110246877 Kwak et al. Oct 2011 A1
20110249026 Singh Oct 2011 A1
20110254929 Yang et al. Oct 2011 A1
20110291945 Ewing, Jr. et al. Dec 2011 A1
20110316679 Pihlaja Dec 2011 A1
20120013606 Tsai et al. Jan 2012 A1
20120019883 Chae et al. Jan 2012 A1
20120026586 Chen Feb 2012 A1
20120050262 Kim et al. Mar 2012 A1
20120057006 Joseph et al. Mar 2012 A1
20120057229 Kikuchi et al. Mar 2012 A1
20120062549 Woo et al. Mar 2012 A1
20120069019 Richards Mar 2012 A1
20120069146 Lee Mar 2012 A1
20120081359 Lee et al. Apr 2012 A1
20120102436 Nurmi Apr 2012 A1
20120113018 Yan May 2012 A1
20120120063 Ozaki May 2012 A1
20120154559 Voss et al. Jun 2012 A1
20120202187 Brinkerhoff, III Aug 2012 A1
20120206484 Hauschild et al. Aug 2012 A1
20120223879 Winter Sep 2012 A1
20120229450 Kim et al. Sep 2012 A1
20120229718 Huang et al. Sep 2012 A1
20120249836 Ali Oct 2012 A1
20120262398 Kim et al. Oct 2012 A1
20120274626 Hsieh Nov 2012 A1
20120274634 Yamada et al. Nov 2012 A1
20120281906 Appia Nov 2012 A1
20130027390 Kim et al. Jan 2013 A1
20130038611 Noritake et al. Feb 2013 A1
20130202221 Tsai Aug 2013 A1
20140035902 An et al. Feb 2014 A1
20140036173 Chang Feb 2014 A1
20140132726 Jung May 2014 A1
20140192172 Kang et al. Jul 2014 A1
20140355302 Wilcox et al. Dec 2014 A1
20150070481 S. et al. Mar 2015 A1
20150185957 Weng et al. Jul 2015 A1
20150226972 Wang Aug 2015 A1
20150260999 Wang et al. Sep 2015 A1
20150341616 Siegel et al. Nov 2015 A1
Foreign Referenced Citations (3)
Number Date Country
2012068532 May 2012 WO
2013109252 Jul 2013 WO
2015026017 Feb 2015 WO
Non-Patent Literature Citations (1)
Entry
International Search Report and Written Opinion, PCT International Patent Application No. PCT/US2016/061313, Craig Peterson, dated Jan. 19, 2017, 22 pages.
Related Publications (1)
Number Date Country
20150116457 A1 Apr 2015 US
Provisional Applications (1)
Number Date Country
61897106 Oct 2013 US