Video display method

Information

  • Patent Grant
  • Patent Number
    9,792,363
  • Date Filed
    Tuesday, February 1, 2011
  • Date Issued
    Tuesday, October 17, 2017
Abstract
A method for video playback uses only resources universally supported by a browser (“inline playback”) operating in virtually all handheld media devices. In one case, the method first prepares a video sequence for display by a browser by (a) dividing the video sequence into a silent video stream and an audio stream; (b) extracting from the silent video stream a number of still images, the number of still images corresponding to at least one of a desired output frame rate and a desired output resolution; and (c) combining the still images into a composite image. In one embodiment, the composite image has a number of rows, with each row formed by the still images created from a fixed duration of the silent video stream. Another method plays the still images of the composite image as a video sequence by (a) loading the composite image to be displayed through a viewport defined by the size of one of the still images; (b) selecting one of the still images of the composite image; (c) setting the viewport to display the selected still image; and (d) setting a timer for a specified time period based on a frame rate, such that, upon expiration of the specified time period: (i) a next one of the still images is selected to be displayed in the viewport, unless all still images of the composite image have been selected; and (ii) the method returns to step (c) if not all still images have been selected.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates to handheld mobile devices. In particular, the present invention relates to playing video on a handheld mobile device.


2. Discussion of the Related Art


Video advertising is an important vehicle for promoting a product or a service on television, on-line devices or mobile devices. It is anticipated that video advertising will be equally important in the coming Internet-enabled television applications. However, current hand-held media devices (e.g., iPad, iPhone, Android devices) implement the HTML5 specification in so many different ways as to make it difficult for video advertising providers to provide a standard method for delivering video advertising. Specifically, because most hand-held media devices play video only in native mode, playing a video embedded in a web page is accomplished only by suspending the browser application and transferring control to a native media player. The file formats expected by the native media players vary among the handheld media devices. Therefore, it is difficult for a website owner to embed on a webpage a video clip that can be played by a large number of hand-held media devices. Further, the user is required to activate a link to start the native media player. Thus, support for video advertising in current handheld media devices is unsatisfactory.


SUMMARY

The present invention provides a method for video playback using only resources universally supported by a browser (“inline playback”) operating in virtually all handheld media devices. Such a video display method is particularly valuable because it enables video playback even on browsers with limited capabilities, and it benefits video advertising on mobile media devices without requiring complex codecs.


According to one embodiment of the present invention, a method prepares a video sequence for display by a browser. The method includes (a) dividing the video sequence into a silent video stream and an audio stream; (b) extracting from the silent video stream a number of still images, the number of still images corresponding to at least one of a desired output frame rate and a desired output resolution; and (c) combining the still images into a composite image. In one embodiment, the composite image has a number of rows, with each row formed by the still images created from a fixed duration of the silent video stream.


According to one embodiment of the present invention, the method for preparing a video sequence for display by a browser further comprises applying heuristic algorithms to the composite image to facilitate smooth video playback.


According to one embodiment of the present invention, the method for preparing a video sequence for display by a browser further comprises compressing the composite image, using one or more of JPEG- and Huffman-based compression algorithms.


According to one embodiment of the present invention, a method plays a composite image of the present invention, which includes a number of still images, as a video sequence. The method comprises: (a) loading the composite image to be displayed through a viewport defined by the size of one of the still images; (b) selecting one of the still images of the composite image; (c) setting the viewport to display the selected still image; and (d) setting a timer for a specified time period based on a frame rate, such that, upon expiration of the specified time period: (i) selecting a next one of the still images to be displayed in the viewport, unless all still images of the composite image have been selected; and (ii) returning to step (c) if not all still images have been selected.


According to one embodiment of the present invention, the method which plays the composite image as a video sequence has the still images arranged in a multiple-row array. In that embodiment, each row of the multiple-row array is formed by still images obtained from a video sequence, the number of still images in each row being related to a fixed time duration.


According to one embodiment of the present invention, the method which plays the composite image as a video sequence synchronizes setting of the selected still image for display with an audio stream.


According to one embodiment of the present invention, the method which plays the composite image as a video sequence may be implemented as an add-on to a web browser. Such an add-on may be implemented by a script written in an industry standard scripting language, such as javascript.


In one embodiment of the present invention, the video sequence is played when a user of an internet-enabled device makes a selection from a web page displayed on a graphical display. The selection may be made by clicking on an icon, such as a mute button.


The methods of the present invention allow a video advertising vendor to offer a standard and scalable method to a website owner to display video advertising on any web browser, regardless of the native format adopted by the handheld media device. Consequently, a network is created to enable running video advertising across many handheld media devices.


The present invention is better understood upon consideration of the detailed description below and the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a composite image formed by still images extracted from four seconds of a video stream, at a frame rate of 15 frames per second, in accordance with one embodiment of the present invention.



FIG. 2 shows the composite image of FIG. 1, with all component still images “greyed out” except for one; FIG. 2 represents specifying a still image to be displayed in a viewport by a playback module implementing a method of the present invention.



FIG. 3 shows a mute button provided during the replay of video in accordance with one embodiment of the present invention.



FIG. 4 shows flow chart 400, which summarizes steps 1-6 of a transcoder of the present invention described herein as steps 401-406.



FIG. 5 shows flow chart 500, which summarizes steps 1-6 of a playback module of the present invention described herein as steps 501-506.



FIG. 6 provides exemplary system 600, in accordance with one embodiment of the present invention.



FIG. 7 shows, in one embodiment, a video sequence being played within a viewport placed at the top of a web page where a “banner” is conventionally displayed.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Although this detailed description is provided in the context of video advertising, the present invention is applicable to displaying any video format data. FIG. 6 provides exemplary system 600, in accordance with one embodiment of the present invention. As shown in FIG. 6, an advertising or campaign manager at an advertising company uploads a typical 16×9 or 4×3 video sequence for use in advertising onto advertising portal site 603. As known to those skilled in the art, a video sequence is a group of images (frames) to be played at a specified frame-rate; most video sequences also include a synchronized audio stream. A video sequence played on a mobile hand-held media device typically uses a frame rate of 15 frames per second, a 480×272 (i.e., 16×9 aspect ratio) resolution, and 192 kbps sound. A typical video sequence used for advertising may be 10-15 seconds long. The advertising or campaign manager may provide further information, such as site targeting and other targeting parameters (e.g., the types of websites on which to run the advertising, and how frequently it should be run).


Portal site 603 provides transcoder module 605 which re-encodes the uploaded video into a playback-friendly format (described below). Transcoder module 605 performs the following steps:

    • 1. Dividing the video sequence into a silent video stream and an audio stream;
    • 2. Storing the audio stream in a separate file;
    • 3. Extracting from the silent video stream a number of still images, the number of still images being determined by the desired output frame-rate and the desired output resolution;
    • 4. Combining the still images into a composite image consisting of a number of rows, each row being formed by the still images created from a one-second segment of the silent video stream (FIG. 1 shows a composite image formed by still images extracted from four seconds of a video stream, at a frame rate of 15 frames per second, in accordance with one embodiment of the present invention);
    • 5. (optional) Applying heuristic algorithms to the composite image aimed at ensuring smooth video playback (e.g., compiling an index of the still images in the composite image, so as to facilitate calculating each still image's position during playback); and
    • 6. (optional) Compressing the composite image, using one or more JPEG- or Huffman-based compression algorithms (e.g., any suitable compression algorithm that takes advantage of the scenic similarity between successive frames).


These steps are summarized as steps 401-406 in flow chart 400 of FIG. 4.
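The layout produced by step 4 reduces to simple arithmetic: one row per one-second segment, one column per frame within that second, so each still image's pixel offset follows from its index. The sketch below is illustrative only; the function and parameter names are not from the patent.

```javascript
// Sketch of the composite-image layout of step 4: one row per one-second
// segment of the silent video stream, one column per frame in that second.
function compositeLayout(durationSeconds, frameRate, frameWidth, frameHeight) {
  const columns = frameRate;      // stills per one-second row
  const rows = durationSeconds;   // one row per second of video
  return {
    width: columns * frameWidth,  // total composite width in pixels
    height: rows * frameHeight,   // total composite height in pixels
    // pixel offset of the i-th still image (0-based) within the composite
    offsetOf: (i) => ({
      x: (i % columns) * frameWidth,
      y: Math.floor(i / columns) * frameHeight,
    }),
  };
}

// FIG. 1's example: four seconds of video at 15 frames per second,
// assuming 480x272 stills
const layout = compositeLayout(4, 15, 480, 272);
```

An index built this way (step 5) lets the playback module compute a still image's position without scanning the composite image.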


Once processed, the composite image is made accessible to advertising servers (e.g., advertising server 602), which serve advertising content to websites (e.g., publisher sites 601) conforming to the targeting parameters. For example, when a user (e.g., client 604) requests a web page from publisher server 601, publisher server 601 embeds in the requested web page code for requesting advertising content from an advertising company's server (e.g., advertising server 602). Upon receiving the web page, client 604 sends the embedded request to advertising server 602, which selects one or more composite images from its collection based on the targeting parameters. Advertising server 602 then sends the selected composite images to client 604 with specific playback instructions. In one application (“VDO”, indicating playback of video according to a method of the present invention), the playback instructions include the frame-size (video resolution), the frame-rate, the universal resource locators (URLs) of the composite images, and the display-size. In the example of FIG. 6, communication between the various parties takes place over the internet. In some embodiments, some communication may take place over one or more private networks. Also, variations and modifications of the protocols (e.g., interactions between advertising server 602 and publisher server 601) discussed above are possible.
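As a rough illustration, the “VDO” playback instructions described above might be carried as a small object; the field names and URLs below are hypothetical, chosen only to show the four pieces of information the text lists.

```javascript
// Hypothetical shape of the "VDO" playback instructions sent to the client;
// field names and URLs are illustrative, not from the patent.
const playbackInstructions = {
  frameSize: { width: 480, height: 272 },   // video resolution of each still
  frameRate: 15,                            // frames per second
  compositeImageUrls: [                     // URLs of the composite images
    'https://ads.example.com/final.0.png',
    'https://ads.example.com/final.1.png',
  ],
  displaySize: { width: 320, height: 180 }, // viewport size on the page
};

// From the frame-rate, the client can derive the timer period used during
// playback (step (d) of the playback method):
const framePeriodMs = 1000 / playbackInstructions.frameRate;
```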


In one embodiment, transcoder 605 may detect motion in the video sequence and provide a composite image that is to be played back using multiple frame rates (i.e., a lower frame rate for relatively still scenes, and a higher frame rate where motion is detected). In that embodiment, the frame rate information is provided to the playback module along with the composite image.
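The variable-rate idea can be sketched as a per-segment decision: each segment's frame rate depends on a motion score. The scoring threshold and the two rates below are assumptions for illustration; the patent does not specify them.

```javascript
// Sketch of variable-rate transcoding: assign each one-second segment a frame
// rate based on a motion score in [0, 1]. Threshold and rates are assumed.
function chooseFrameRates(motionScores, lowRate = 5, highRate = 15) {
  return motionScores.map((m) => (m > 0.5 ? highRate : lowRate));
}

// During playback, the timer duration for stills in segment s would then be
// 1000 / rates[s] milliseconds, rather than a single global period.
const rates = chooseFrameRates([0.1, 0.8, 0.3]);
```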


Dividing a given video sequence into separately processed video and audio streams (synchronizing the two during playback on the client side) allows the playback module (described below) to be simply implemented using a scripting language, e.g., Javascript. This simple approach reduces the overall integration time for incorporating video advertising into a web page to mere minutes, instead of months. Using a frame-by-frame approach, the transcoding module of the present invention takes advantage of a frame-based delivery approach, which keeps related frames together to achieve high compression.


A playback module written in a suitable script language (e.g., a script language supported by most, if not all, browsers, such as javascript) may be provided to the user's browser.


The playback module processes the composite images produced by the transcoder module. The playback module also manages buffering to deliver a continuous video experience using the component still images in the composite images. The playback module performs the following steps:

    • 1. Loading the first composite image with a viewport defined by the frame-size, such that at any given time, only one still image's worth of the composite image is displayed in the viewport;
    • 2. Setting the portion of the composite image to be displayed in the viewport to the first still image of the composite image (e.g., the upper left still image, with an index value of 0);
    • 3. Setting a timer to instruct the browser to call back the playback module after a specified time, the specified time being the time interval defined by the frame-rate;
    • 4. (entry point) Setting the portion of the composite image to be displayed in the viewport, if available, to the next still image of the composite image (e.g., the still image in the composite image to the right of the one currently being displayed, or the image having the next greater index);
    • 5. (optional) Synchronizing the next still image, if available, to be displayed with the audio stream; and
    • 6. If the next still image is available, returning to step 3 above.


These steps are summarized in FIG. 5 by flow chart 500, showing steps 1-6 of the playback module described herein as steps 501-506.
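The viewport positioning in steps 2 and 4 can be restated as pure arithmetic, independent of the DOM: given a 1-based frame number, compute which composite file holds it and the CSS background offset that exposes it. The constants below mirror the exemplary module later in this description and are assumed values for illustration.

```javascript
// DOM-free restatement of the per-frame positioning: which composite file a
// frame lives in, and the negative background-position that exposes it
// through the viewport.
function framePosition(frame, { width, height, framerate, framesPerFile, linesPerFile }) {
  const file = Math.floor((frame - 1) / framesPerFile);       // composite file index
  const x = width * ((frame - 1) % framerate);                // column offset in px
  const y = height * (Math.floor((frame - 1) / framerate) - linesPerFile * file); // row offset
  return { file, backgroundPosition: `-${x}px -${y}px` };
}

// Assumed parameters: 320x180 stills, 12 fps, 2 one-second rows per file
const params = { width: 320, height: 180, framerate: 12, framesPerFile: 24, linesPerFile: 2 };
```

Setting the element's background-position to the returned value each time the timer fires is what advances the “video” by one frame.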


This sequence of steps causes the browser to display the specified still images one by one, with each successive still image displayed for the time period defined by the frame rate, thus achieving a video effect within the browser without invoking the native player on the handheld media device. In one embodiment, where the video is to be played using variable-rate playback, the timer is set, for each still image, to that image's corresponding frame rate, which may change from still image to still image.



FIG. 2 shows the composite image of FIG. 1, with all component still images “greyed out” except for one; FIG. 2 represents specifying a still image to be displayed in a viewport by a playback module implementing a method of the present invention. Although this example shows the component still images of the composite image being arranged in a rectangular array in an implicit order, such an arrangement is not necessary. For example, the component still images in the composite image may be arranged in any manner. In one embodiment, indexing information may be provided using a separate file, or through javascript.


The composite image may be composed from one or more layers of images. The layers are drawn in proper order to form the final image shown to the user as a frame. Such an approach enhances data compression and further reduces the total amount of image data transferred. For example, each frame may be decomposed into a portion identical to another frame (the “base frame”) and one or more component layers each representing an incremental change in scene from the base frame. Typically, the incremental change in scene may be encoded using far fewer bits than the base frame, which needs to be sent only once for the many frames that depend on it.
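A toy sketch of this layered decomposition, using flat arrays as stand-in pixel buffers (real implementations operate on image data and are not specified by the patent): frames near a base frame are stored as sparse changes, and drawing the layers in order reconstructs the frame.

```javascript
// Encode a frame as the set of pixels that differ from its base frame.
function encodeAgainstBase(base, frame) {
  const delta = [];
  frame.forEach((v, i) => {
    if (v !== base[i]) delta.push([i, v]);   // record only changed pixels
  });
  return delta;
}

// Reconstruct a frame by drawing the base layer, then the incremental layer.
function decodeFromBase(base, delta) {
  const frame = base.slice();                   // start from the base layer
  delta.forEach(([i, v]) => { frame[i] = v; }); // apply the incremental change
  return frame;
}

const base = [1, 1, 1, 1];                      // base frame, sent once
const delta = encodeAgainstBase(base, [1, 9, 1, 1]); // a dependent frame
```

The delta carries only the changed pixels, which is the source of the compression gain the text describes.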


The method above can also be implemented using the “Canvas” element under the HTML5 standard, although “Canvas” is not supported on all mobile browsers. In that implementation, every time the timer expires, the still image to be displayed in the viewport is rendered into a bitmap (“drawn”) and copied into the canvas using appropriate application program interface (API) calls. In some embodiments, the bitmap is a data structure at a specified location in memory.
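In a Canvas variant, each timer tick would copy one still image's rectangle out of the composite image onto the canvas. The source-rectangle arithmetic is shown separately so it can be checked without a DOM; the drawing helper is a sketch assuming a browser 2D context.

```javascript
// Source rectangle of the frameIndex-th still (0-based) within a composite
// image laid out with `columns` stills per row.
function sourceRect(frameIndex, columns, frameWidth, frameHeight) {
  return {
    sx: (frameIndex % columns) * frameWidth,
    sy: Math.floor(frameIndex / columns) * frameHeight,
    sw: frameWidth,
    sh: frameHeight,
  };
}

// Browser-only sketch: ctx is canvas.getContext('2d'), compositeImg an <img>
// holding the composite image. Copies one still onto the canvas per tick.
function drawFrame(ctx, compositeImg, frameIndex, columns, w, h) {
  const r = sourceRect(frameIndex, columns, w, h);
  ctx.drawImage(compositeImg, r.sx, r.sy, r.sw, r.sh, 0, 0, r.sw, r.sh);
}
```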


Optionally, the playback module can implement a mute button (FIG. 3), so that the user may elect to play the video without the audio stream. In one embodiment, the mute button allows the user to choose playing the original video with sound in fullscreen (as supported by most devices). The mute button provides a click context that enables the playback module to play the video automatically thereby ensuring a desirable browsing experience.


In addition, the viewport for displaying the video sequence according to the present invention may be placed anywhere on the display of the media player. In one embodiment, as shown in FIG. 7, the video sequence can be shown within a viewport placed at the top of a web page where a “banner” is conventionally displayed.


An exemplary playback module written in javascript is provided below to illustrate the techniques described above:














<html>
<head>
<meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; user-scalable=0;"/>
<title>iVdopia WAP ads demo</title>
<link rel="canonical" href="http://www.ivdopia.com">
</head>
<body>
<div id='video' style='background-image: url(final.0.png); background-position:0px 0px; width:320; height:180;'><a onclick='document.getElementById("myAudio").play();'><img src='muteicon.png'/></a><audio id="myAudio" src="audio.mp3" muted=true></audio></div>
<!--div id='log'></div-->
<script>
var framerate=12;
var totalImages=16;
var totalFrames=2*totalImages*framerate;
var runUpto=totalFrames;
var width=320;
var height=180;
var currentFrame=0;
var totallines=Math.ceil(totalFrames/framerate);
var linesPerFile=Math.ceil(totallines/totalImages);
var framesPerFile=Math.ceil(totalFrames/totalImages);
var currFile=0;
var video=document.getElementById('video');
var log=document.getElementById('log');
var debugLog=function(str) {
  if(log) log.innerText+=str+"\n";
}
debugLog(totallines+" " + linesPerFile + " " + totalFrames);
var precache=function() {
  for(i=1;i<totalImages;i++) {
    var image=document.createElement('img');
    image.src='final.'+i+'.png';
    debugLog(image.src);
  }
}
var videoPlayback=undefined;
var autoPlayVideo=function() {
  videoPlayback=setInterval(function() {
    currentFrame=currentFrame+1;
    if(currentFrame>=runUpto) {
      clearInterval(videoPlayback);
      videoPlayback=undefined;
      return;
    }
    showFrame(currentFrame);
  }, 1000/framerate);
}
var showFrame=function(currFrame) {
  var currFile=Math.floor((currFrame-1)/framesPerFile);
  currX=width*((currFrame-1)%framerate);
  currY=height*Math.floor(((currFrame-1)/framerate)-(linesPerFile*currFile));
  debugLog(currFrame+" " + currX + " " + currY + " " + currFile + " " + linesPerFile);
  video.style['background-position']='-'+currX+'px -'+currY+'px';
  if(video.style['background-image']!='url(final.'+currFile+'.png)') {
    video.style['background-image']='url(final.'+currFile+'.png)';
  }
  currentFrame=currFrame;
}
precache();
var audio=document.getElementById("myAudio");
autoPlayVideo();
var setCurrentTime=undefined;
audio.addEventListener('timeupdate',function() {
  debugLog("CurrentTime "+this.currentTime);
  /*if(setCurrentTime==undefined) {
    setCurrentTime=true;
    this.currentTime=currentFrame/framerate;
  }*/
  if(videoPlayback==undefined) {
    autoPlayVideo();
  }
  showFrame(Math.floor(this.currentTime*framerate));
});
</script>
</body>
</html>









The present invention is applicable to all mobile or on-line websites, applications and other video content (e.g., television). The devices that can benefit from the present invention include all electronic devices capable of connecting to the internet and displaying advertising, such as personal computers, notebooks, iPads, iPods, iPhones, Android devices, BlackBerry devices, televisions, and Internet-enabled televisions.


The above detailed description is provided to illustrate specific embodiments of the present invention and is not intended to be limiting. Numerous variations and modifications within the scope of the present invention are possible. The present invention is set forth in the accompanying claims.

Claims
  • 1. A method, comprising: receiving, by an advertising server, a request from a user device including an identification of content located on the advertising server, the content including a video sequence; dividing the video sequence into a silent video stream and an audio stream; extracting from the silent video stream a plurality of still images, the plurality of still images being extracted from the video sequence according to one or more frame rates; combining the plurality of still images into a composite image for display by a web browser, the composite image including the plurality of still images in a multiple row and multiple column array; sending the composite image, the audio stream, and playback instructions as separate files to the web browser on the user device in response to the request, the playback instructions including instructions written in a scripting language to cause the web browser, using only resources supported by the web browser and without invoking a native media player on the user device, to: create a viewport to display a first still image of the plurality of still images, the viewport being embedded in a web page, wherein dimensions of the viewport are equal to dimensions of the first still image; display the first still image in the viewport by copying the first still image as a bitmap onto a canvas; start a timer with a duration based on a frame rate between the first still image and a second still image of the plurality of still images to follow the first still image; in response to the timer timing out, automatically display the second still image in the viewport by copying the second still image as a bitmap onto the canvas; repeat starting of the timer with the duration based on the one or more frame rates and displaying of the plurality of still images including displaying each of the plurality of still images in the viewport in sequence; and synchronize and play the audio stream concurrently with the displaying of each of the plurality of still images; wherein each complete row of the composite image is created from a fixed duration of the silent video stream.
  • 2. The method of claim 1, further comprising applying heuristic algorithms to the composite image to facilitate smooth video playback.
  • 3. The method of claim 2, wherein one heuristic algorithm compiles an index for the still images in the composite image.
  • 4. The method of claim 3, wherein the index is used to calculate a position of each of the plurality of still images within the composite image.
  • 5. The method of claim 1, wherein display of each of the plurality of still images in the viewport in sequence is performed according to an index attached to the composite image, the index indicating a predetermined order in which to display the still images.
  • 6. The method as in claim 5, wherein the index is provided separately from the composite image.
  • 7. The method of claim 1, further comprising compressing the composite image.
  • 8. The method of claim 7, wherein compressing the composite image includes using one or more of JPEG and Huffman-based compression algorithms.
  • 9. The method of claim 1, wherein extracting the plurality of still images comprises decomposing each of the plurality of still images into one or more component layers.
  • 10. The method of claim 9, wherein decomposing each still image of the plurality of still images comprises identifying a base still image from a subset of the plurality of still images, and encoding other still images in the subset each as a combination of component layers representing the base still image and one or more incremental changes from the base still image.
  • 11. The method of claim 1, further comprising separately storing the audio stream into a separate file.
  • 12. The method of claim 1, wherein the user device comprises one of: a personal computer, a notebook computer, a tablet computer, a personal digital assistant, a media player, a mobile telephone, and an internet-enabled television.
  • 13. The method of claim 1, wherein sending the composite image, the audio stream, and playback instructions further comprises sending a playback module.
  • 14. The method of claim 1, wherein the one or more frame rates are based on a level of motion in the content.
  • 15. A server, comprising: an interface for receiving into the server a video sequence for processing; and a processor to: receive a request from a user device including an identification of content located on an advertising server, the content including a video sequence; divide the video sequence into a silent video stream and an audio stream; extract from the silent video stream a plurality of still images, the plurality of still images being extracted from the video sequence according to one or more frame rates; and combine the plurality of still images into a composite image for display by a web browser, the composite image including the plurality of still images arranged in a multiple row and multiple column array; and send the composite image, the audio stream, and playback instructions as separate files to the web browser on the user device in response to the request, the playback instructions including instructions written in a scripting language to cause the web browser, using only resources supported by the web browser and without invoking a native media player on the user device, to: create a viewport to display a first still image of the plurality of still images, the viewport being embedded in a web page, wherein dimensions of the viewport are equal to dimensions of the first still image; display the first still image in the viewport by copying the first still image as a bitmap onto a canvas; start a timer with a duration based on a frame rate between the first still image and a second still image of the plurality of still images to follow the first still image; in response to the timer timing out, automatically display the second still image in the viewport by copying the second still image as a bitmap onto the canvas; repeat starting of the timer with the duration based on the one or more frame rates and displaying of the plurality of still images including displaying each of the plurality of still images in the viewport in sequence; and synchronize and play the audio stream concurrently with the displaying of each of the plurality of still images; wherein each complete row of the composite image is created from a fixed duration of the silent video stream.
  • 16. The server of claim 15, wherein the video sequence comprises a video for advertising.
  • 17. The server of claim 16, wherein the video sequence is received from an advertising server.
  • 18. The server of claim 17, wherein the processor further, when the user device requests the web page from a publisher server, causes the web browser to request the composite image from the advertising server.
  • 19. The server of claim 16, wherein the playback instructions include instructions to display each of the plurality of still images of the composite image at the one or more frame rates in the web browser to provide a video experience to the user device.
  • 20. The server of claim 15, wherein the processor further applies heuristic algorithms to the composite image to facilitate smooth video playback.
  • 21. The server of claim 20, wherein one heuristic algorithm compiles an index for the still images in the composite image.
  • 22. The server of claim 21, wherein the index is used to calculate a position of each of the plurality of still images within the composite image.
  • 23. The server of claim 15, wherein the displaying of each of the plurality of still images in the viewport in sequence is performed according to an index attached to the composite image, the index indicating a predetermined order in which to display the still images.
  • 24. The server as in claim 23, wherein the index is provided separately from the composite image.
  • 25. The server of claim 15, wherein the processor further compresses the composite image.
  • 26. The server of claim 25, wherein the processor further compresses the composite image by using one or more of JPEG and Huffman-based compression algorithms.
  • 27. The server of claim 15, wherein the processor further extracts the plurality of still images by decomposing each still image into one or more component layers.
  • 28. The server of claim 27, wherein decomposing each of the plurality of still images comprises identifying a base still image from a subset of the plurality of still images, and encoding other still images in the subset each as a combination of component layers representing the base still image and one or more incremental changes from the base still image.
  • 29. The server of claim 15, further comprising separately storing the audio stream into a separate file.
  • 30. The server of claim 15, wherein the processor further receives and sends over a network.
  • 31. The server of claim 30, wherein the user device comprises one of: a personal computer, a notebook computer, a tablet computer, a personal digital assistant, a media player, a mobile telephone, and an internet-enabled television.
  • 32. The server of claim 15, wherein the one or more frame rates are based on a level of motion in the content.
Related Publications (1)
Number Date Country
20120194734 A1 Aug 2012 US