Method of making a digital camera image of a scene including the camera user

Information

  • Patent Grant
  • Patent Number
    7,855,737
  • Date Filed
    Wednesday, March 26, 2008
  • Date Issued
    Tuesday, December 21, 2010
Abstract
A method of making an image in a digital camera comprises capturing a digital image of a scene into which the camera user is to be inserted, and superimposing a symbol (subject locator) onto the scene image representing at least a part of a human subject. The subject locator is scaled to a desired size and moved to a desired position relative to the scene image. Next a digital image of the user is captured, and at least the part of the user image represented by the subject locator is extracted. The part of the user image represented by the subject locator is scaled (before or after extraction) to substantially the same size as the subject locator and inserted into the first image at the position of the subject locator.
Description

The present invention relates to a method of making a digital camera image of a scene including the camera user.


BACKGROUND OF THE INVENTION

A disadvantage with conventional digital cameras is that the camera user, i.e. the photographer, is located on the opposite side of the camera to the scene being photographed, so that he is automatically excluded from the scene. Self-timers, which set a delay between pressing the shutter button and releasing the shutter, allow the user to move round to the front of the camera in time to appear in the scene. However, the user has to position himself in the scene by guesswork and has no accurate control over his position or size in the scene.


US Patent Application Publication No. US 2006/0125928 discloses a digital camera having forward and rear facing lenses, so that an image of the user can be taken at the same time as the image of the scene. The image of the user is then “associated” with the image of the scene. However, such association does not provide a natural integration of the user into the scene.


SUMMARY OF THE INVENTION

In a first embodiment, a method of making an image in a digital camera is provided, comprising capturing a digital image of a scene into which the camera user is to be inserted, and superimposing a symbol (subject locator) onto the scene image representing at least a part of a human subject. The subject locator is scaled to a desired size and moved to a desired position relative to the scene image. Next a digital image of the user is captured, and at least the part of the user image represented by the subject locator is extracted. The part of the user image represented by the subject locator is scaled (before or after extraction) to substantially the same size as the subject locator and inserted into the first image at the position of the subject locator.


In a second embodiment, a further method of making an image in a digital camera is provided, comprising displaying a preview image of a scene into which the camera user is to be inserted, and superimposing the subject locator on the preview image. The subject locator is scaled to a desired size and moved to a desired position relative to the edges of the preview image. The camera user is detected entering the scene displayed by the preview image, and the preview image is scaled and panned to bring the part of the preview image represented by the subject locator to substantially the same size and position as the subject locator. Finally, a digital image of the scene is captured.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will now be described by way of example with reference to the accompanying drawings, in which:



FIG. 1 is a block diagram of a digital camera operating in accordance with an embodiment of the present invention.



FIG. 2 is a flow diagram of the steps performed by software in the camera of FIG. 1 in a first embodiment of the invention.



FIGS. 3.1 to 3.4 are schematic diagrams illustrating the operation of the first embodiment.



FIG. 4 is a flow diagram of the steps performed by software in the camera of FIG. 1 in a second embodiment of the invention.



FIGS. 5.1 to 5.3 are schematic diagrams illustrating the operation of the second embodiment.





DESCRIPTION OF THE PREFERRED EMBODIMENTS

In the present specification, the term “image” refers to image data and, except where the context requires, does not necessarily imply that an actual viewable image is present at any particular stage of the processing.



FIG. 1 is a block diagram of a digital image acquisition device 20, which may be a portable digital camera per se or a digital camera incorporated into a cell phone (in the latter case only the camera components of the phone are shown). The device includes a processor 120. It can be appreciated that many of the processes implemented in the digital camera may be implemented in, or controlled by, software operating in a microprocessor, central processing unit, controller, digital signal processor and/or an application-specific integrated circuit, collectively depicted as processor 120. Generically, the user interface and the control of peripheral components such as buttons and the display are controlled by a microcontroller 122. The processor 120, in response to a user input at 122, such as half-pressing the shutter button (pre-capture mode 32), initiates and controls the digital photographic process. Ambient light exposure is monitored using a light sensor 40 in order to automatically determine whether a flash is to be used. The distance to the subject is determined using a focus component 50, which controls a zoomable main lens system 62 on the front of the camera to focus an image of an external scene onto an image capture component 60 within the camera. If a flash is to be used, the processor 120 causes the flash 70 to generate a photographic flash in substantial coincidence with the recording of the image by the image capture component 60 upon full depression of the shutter button. The image capture component 60 digitally records the image in colour and preferably includes a CCD (charge coupled device) or CMOS sensor to facilitate digital recording. The flash may be selectively generated either in response to the light sensor 40 or in response to a manual input 72 from the user of the camera. The high resolution image recorded by the image capture component 60 is stored in an image store 80, which may comprise computer memory such as dynamic random access memory or a non-volatile memory. The camera is equipped with a display screen 100, such as an LCD, for preview and post-view of images.


In the case of preview images, which are generated in the pre-capture mode 32 with the shutter button half-pressed, the display 100 can assist the user in composing the image, as well as being used to determine focusing and exposure. Temporary storage 82 is used to store one or more of the preview images and can be part of the image store 80 or a separate component. The preview image is preferably generated by the image capture component 60. For speed and memory efficiency, preview images preferably have a lower pixel resolution than the main image taken when the shutter button is fully depressed, and are generated by sub-sampling a raw captured image using software 124, which can be part of the general processor 120, dedicated hardware, or a combination thereof. Depending on the settings of this hardware subsystem, the pre-acquisition image processing may require a preview image to satisfy predetermined test criteria before it is stored. Such test criteria may be chronological, such as constantly replacing the previously saved preview image with a newly captured preview image every 0.5 seconds during the pre-capture mode 32, until the high resolution main image is captured by full depression of the shutter button. More sophisticated criteria may involve analysis of the preview image content, for example testing the image for changes, before deciding whether the new preview image should replace a previously saved image. Other criteria may be based on image analysis, such as sharpness, or on metadata analysis, such as the exposure conditions, whether a flash will be used, and/or the distance to the subject.


If test criteria are not met, the camera continues by capturing the next preview image without saving the current one. The process continues until the final high resolution main image is acquired and saved by fully depressing the shutter button.


Where multiple preview images can be saved, a new preview image will be placed on a chronological First In First Out (FIFO) stack until the user takes the final picture. The reason for storing multiple preview images is that the last preview image, or any single preview image, may not be the best reference image for comparison with the final high resolution image in, for example, a red-eye correction process or, in the present embodiment, mid-shot mode processing. By storing multiple images, a better reference image can be selected, and closer alignment between the preview and the final captured image can be achieved in an alignment stage discussed later.
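
To illustrate how the chronological and content-based test criteria described above might interact with the FIFO of saved preview images, the following is a minimal sketch in Python. The 0.5-second interval comes from the text; the FIFO depth, the frame_difference() metric and its threshold are illustrative assumptions rather than anything specified by the patent.

    import time
    from collections import deque

    PREVIEW_INTERVAL_S = 0.5     # chronological criterion: replace roughly every 0.5 s
    MAX_SAVED_PREVIEWS = 4       # illustrative FIFO depth; the text only says "multiple"

    saved_previews = deque(maxlen=MAX_SAVED_PREVIEWS)   # oldest preview is discarded first

    def frame_difference(a, b):
        # Placeholder content-change metric over sub-sampled luminance values;
        # a real camera would use whatever statistics its preview pipeline provides.
        return sum(abs(int(x) - int(y)) for x, y in zip(a, b)) / max(len(a), 1)

    def should_save(new_frame, now):
        """Apply the chronological and content-based test criteria to a new preview."""
        if not saved_previews:
            return True
        last_frame, last_time = saved_previews[-1]
        if now - last_time >= PREVIEW_INTERVAL_S:          # chronological test
            return True
        if frame_difference(new_frame, last_frame) > 8.0:  # content-change test (arbitrary threshold)
            return True
        return False

    def on_preview_captured(new_frame):
        now = time.monotonic()
        if should_save(new_frame, now):
            saved_previews.append((new_frame, now))        # FIFO: oldest entry is pushed out

If the criteria are not met, the new preview is simply discarded and the next one is evaluated, matching the behaviour described above.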


The camera is also able to capture and store in the temporary storage 82 one or more low resolution post-view images. Post-view images are low resolution images essentially the same as preview images, except that they occur after the main high resolution image is captured.


In addition to the zoomable main lens system 62, the camera includes a zoomable subsidiary lens system 66 and a corresponding image capture component 68. In a cell phone the subsidiary lens system 66 normally faces rearwardly towards a user holding the phone, that is, in the opposite direction to the forwardly facing front lens system 62. This allows the user to enter into a video phone call with a remote party while holding the phone in a natural manner. The components allowing video calling are not relevant to the present invention and are not shown. The subsidiary lens system 66 may be focusable, using a focus component 64, or have a fixed focus, in which case the focus component 64 would be omitted. A user input 84 allows the user to select either lens system for use; as shown in FIG. 1, the same processing circuitry is used for both, except that in this embodiment a rearward-facing flash, corresponding to the forward-facing flash 70, is omitted.


The camera includes a “User Composite Mode” which can be selected by a user input 30 at any time that a user wishes to be inserted into a scene imaged by the front lens system 62 and currently previewed on the camera display screen 100. FIG. 2 is a flow diagram of the steps performed by software in the camera of FIG. 1 when User Composite Mode is selected in a first embodiment of the invention. Where a user input is required for any particular step, the existing camera controls may be programmed for this purpose.

  • Step 200: In response to full depression of the shutter button, a first still image 300 (FIG. 3.1) of the scene imaged by the front lens 62 on the component 60 is captured. The first image 300 is displayed on the screen 100.
  • Step 202: Foreground/background separation on the image 300 is optionally performed using techniques described in, for example, International Patent Application Nos. PCT/EP2006/008229 (FN119) and PCT/EP2006/005109 (FN122). The separation data is stored for use in step 208.
  • Step 204: In response to user input, a subject locator 302 (FIG. 3.2) is generated and superimposed on the displayed image 300. The subject locator 302 is a symbol representing all or part of a human subject. In the present case the subject locator is a simplified outline of the head and body of a human subject. The subject locator may be available in several different profiles corresponding to, e.g., head and shoulders, mid-shot or full length, in which case the user selects the desired one. The subject locator 302 shown in FIG. 3.2 is assumed to be a full length profile.
  • Step 206: In response to user input, the subject locator 302 is shifted relative to the image frame defined by the edges 303 of the display screen 100 to place the subject locator at a desired position relative to the still image 300. The subject locator may also be zoomed (i.e. scaled up or down) to a desired size relative to the image frame. A conventional four-way directional menu control may be used to shift the subject locator, and a conventional manual zoom control may be used to zoom the subject locator, both controls being programmed in User Composite Mode for those purposes.
  • Step 208: If step 202 was performed, the user also selects, in a case where the subject locator 302 partly overlaps the foreground of the image 300, whether the user is to be inserted in front of or behind the foreground of the image 300.
  • Step 210: Once selections in step 208 are confirmed, the camera switches to preview mode of the image seen through the rear lens 66, i.e. an image of the user.
  • Step 212: In response to full depression of the shutter button, a second still image 304 (FIG. 3.3) of the user imaged by the rear lens 66 on the component 68 is captured. The second image 304 is displayed on the screen 100 for confirmation by the user. If not confirmed, one or more further images may be captured until the user is satisfied with the captured image 304.
  • Step 214: Upon confirmation, the software performs face detection and/or foreground/background separation on the second image 304 to locate the user's face and body 306, or as much as is captured in the image 304. Face detection may use techniques described in, for example, International Patent Application No. PCT/EP2007/005330 (FN143), while foreground/background separation may use techniques as previously referred to.
  • Step 216: The software extracts the face and—depending on the profile of the selected subject locator—all or part of the user's body from the second image 304. For example, if the subject locator were a head and shoulders profile, the software would only extract the head and shoulders of the user. The software then scales the extracted image component up or down to substantially the same size as the subject locator. Alternatively, the scaling could be done by digitally zooming the entire second image 304 before extraction of the face and (part of the) body.
  • Step 218: Finally, the image component extracted in step 216 is inserted into the first image 300 at the position of the subject locator 302 to provide a composite image 308, FIG. 3.4, in which the inserted image component replaces the underlying original image data and the subject locator is removed (a sketch of this extract-scale-insert sequence follows this list). Known blending techniques may be used to smooth the transition between the inserted image component 306 and the original scene 300. If steps 202 and 208 were performed in a case where the subject locator 302 partly overlaps the foreground of the image 300, only that part of the extracted image component overlapping the background of the image 300 is inserted into the image 300. In a variation of this step, the software could extract all of the face and body in step 216 and only insert the part corresponding to the selected subject locator profile in step 218 (e.g. head and shoulders).
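
The following is a minimal sketch, in Python using the Pillow library, of the extract-scale-insert sequence of steps 214 to 218. The binary mask standing in for the output of face detection and foreground/background separation, the function name and the blur-based blending are illustrative assumptions, not the camera's actual firmware interface.

    from PIL import Image, ImageFilter

    def composite_user_into_scene(scene_img, user_img, user_mask, locator_box):
        """Insert the user extracted from the second image into the first image.

        scene_img   -- first image 300 (PIL.Image, mode "RGB")
        user_img    -- second image 304 (PIL.Image, mode "RGB")
        user_mask   -- mask (PIL.Image, mode "L"), white where face detection /
                       foreground-background separation found the user (step 214)
        locator_box -- (left, top, right, bottom) pixel box of the subject
                       locator 302 in the coordinates of the first image
        """
        # Step 216: extract the face and body covered by the mask.
        bbox = user_mask.getbbox()
        user_crop = user_img.crop(bbox)
        mask_crop = user_mask.crop(bbox)

        # Step 216: scale the extracted component to substantially the locator size.
        loc_w = locator_box[2] - locator_box[0]
        loc_h = locator_box[3] - locator_box[1]
        user_crop = user_crop.resize((loc_w, loc_h))
        mask_crop = mask_crop.resize((loc_w, loc_h))

        # Step 218: a soft mask edge stands in for the blending mentioned in the text.
        mask_crop = mask_crop.filter(ImageFilter.GaussianBlur(radius=2))

        # Step 218: insert at the subject locator position, replacing underlying data.
        composite = scene_img.copy()
        composite.paste(user_crop, (locator_box[0], locator_box[1]), mask_crop)
        return composite

Restricting the insertion to the background, as selected in step 208, would amount to zeroing the mask wherever the stored foreground map of image 300 marks foreground.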


Various modifications of the above embodiment are possible.


The first and second images 300, 304 need not be captured in the order stated; for example, steps 210 to 214 could be done before steps 200 to 208. If desired, bearing in mind that in this embodiment the camera has both front and rear lens systems, the first and second images could be captured at substantially the same time. In another modification, one or both of the images 300, 304 could be pre-existing images, i.e. captured and stored before the user enters User Composite Mode. In that case, steps 200 and 212 would consist of selecting the relevant images from the stored images.


In a case where the camera is not a dual-lens camera, i.e. it has only a front-facing lens 62, the second image 304 could be captured through the front lens by allowing the user time to move round to the front of the camera or by turning the camera around to face the user. The second image could then be captured using a timer; or, if the camera has a secondary front-facing display, by the user manually capturing the second image when satisfied with the image shown in the secondary display; or by automatically capturing a suitable image of the user fitting the profile, as described for the second embodiment. Further alternatively, the second image 304 could be taken by a third party.


Furthermore, where the camera is provided with a speaker, the software could be arranged to produce audio directions via the speaker to guide the user to a desired location within the scene, in order to improve or replace the scaling referred to in step 216. For example, the user could be instructed to move left, right, forward or backwards within the scene.
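
A sketch of how such prompts might be derived is shown below, comparing the detected subject's position and apparent size against the subject locator; the tolerance value and the treatment of left/right mirroring are assumptions made for illustration.

    def guidance_prompt(subject_box, locator_box, tolerance=0.10):
        """Return an audio prompt ("move left", "move backwards", ...) or None when aligned.

        Boxes are (left, top, right, bottom) in image coordinates. Because the scene
        camera mirrors the user's own sense of left and right, a real implementation
        might swap the horizontal prompts for a user facing the camera.
        """
        def width(box):
            return box[2] - box[0]

        def centre_x(box):
            return (box[0] + box[2]) / 2.0

        size_ratio = width(subject_box) / float(width(locator_box))
        if size_ratio > 1.0 + tolerance:
            return "move backwards"      # subject appears larger than the locator
        if size_ratio < 1.0 - tolerance:
            return "move forward"        # subject appears smaller than the locator

        offset = centre_x(subject_box) - centre_x(locator_box)
        if offset < -tolerance * width(locator_box):
            return "move right"          # subject sits to the left of the locator in the image
        if offset > tolerance * width(locator_box):
            return "move left"
        return None                      # close enough; no prompt needed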


In another variation, the scaling referred to in step 216 could be done before extraction by performing face detection and/or foreground/background separation on a preview of the second image 304 to locate the user's face and body 306, and then optically zooming the preview so that, when the second image 304 is captured, the face and body are already at the correct size for placement at the subject locator 302 in the image 300.
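
As a worked example of this variation, the required optical zoom can be estimated from the ratio of the subject locator's height to the height of the face and body detected in the preview; the clamp to a 1x-5x range is an illustrative assumption about the lens, not a value from the patent.

    def required_zoom_factor(preview_subject_box, locator_box, max_zoom=5.0):
        """Zoom to apply before capturing the second image 304 so that the detected
        face and body already match the subject locator's size.

        Boxes are (left, top, right, bottom) in preview pixel coordinates.
        """
        subject_height = preview_subject_box[3] - preview_subject_box[1]
        locator_height = locator_box[3] - locator_box[1]
        factor = locator_height / float(subject_height)
        return max(1.0, min(factor, max_zoom))   # clamp to the assumed optical zoom range

For instance, if the detected body spans 400 pixels of the preview while the subject locator is 600 pixels tall, the preview would be zoomed by a factor of 1.5 before the second image is captured.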


It is also to be noted that by placing the subject locator 302 in front of a person in the original scene 300, the user can replace that person in the scene. It is also possible, by having a subject locator profile corresponding just to a face, to replace a person's face while retaining their original clothing, etc.



FIG. 4 is a flow diagram of the steps performed by software in the camera of FIG. 1 when User Composite Mode is selected in a second embodiment of the invention. At the commencement of the process it is assumed that the camera is in preview mode and the display 100 is showing a preview image derived through the front lens system 62, i.e. a preview of a scene into which the user wishes to be inserted. Again, where a user input is required for any particular step, the existing camera controls may be programmed for this purpose.

  • Step 400: A face detection algorithm locates and tracks faces (if any) in the displayed preview image 500. In FIG. 5.1 face tracking is indicated by the brackets 502.
  • Step 402: In response to user input, a subject locator 504 is generated and superimposed on the displayed preview image 500. As before, the subject locator may be available in several different profiles, in which case the user selects the desired one.
  • Step 404: In response to user input, the subject locator 504 is shifted relative to the image frame defined by the edges 506 of the display screen 100 to place the subject locator at a desired position relative to the preview image 500. The subject locator may also be zoomed to a desired size relative to the image frame.
  • Step 406: The user activates a self-timer button to allow time to move round to the front of the camera and enter the scene.
  • Step 408: The software detects and tracks an (additional) face 508 entering the scene.
  • Step 410: When the software detects that the additional face 508 has substantially stopped moving, or at the expiration of a time period set by the self-timer button, the entire preview image is zoomed (optically and/or digitally) and panned (digitally) to bring the image 510 of the user (or relevant part as determined by the subject locator profile) to a position where it is superimposed on the subject locator 504 with a size substantially the same as that of the subject locator (a sketch of this computation follows the list). Note that the position of the subject locator 504 is fixed relative to the edges 506 of the frame, so that panning and zooming the preview image effectively moves the entire image relative to the subject locator.
  • Step 412: When the panning and zooming is complete, the subject locator 504 is removed and the scene imaged by the front lens 62 on the component 60 is captured.
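
The following sketch, referenced from step 410 above, shows one way of computing the zoom factor and pan offsets that map the tracked face box onto the fixed subject locator; the variable names, and how the correction would be split between optical and digital zoom, are assumptions for illustration.

    def pan_zoom_to_locator(face_box, locator_box):
        """Compute (zoom, pan_x, pan_y) that bring the tracked face 508 onto the
        subject locator 504, which is fixed relative to the frame edges 506.

        Boxes are (left, top, right, bottom) in preview coordinates. The preview is
        first zoomed about the image origin, then translated by (pan_x, pan_y).
        """
        face_w = face_box[2] - face_box[0]
        loc_w = locator_box[2] - locator_box[0]
        zoom = loc_w / float(face_w)

        # Centre of the face after zooming about the origin.
        face_cx = (face_box[0] + face_box[2]) / 2.0 * zoom
        face_cy = (face_box[1] + face_box[3]) / 2.0 * zoom

        # Pan so that the zoomed face centre coincides with the locator centre.
        loc_cx = (locator_box[0] + locator_box[2]) / 2.0
        loc_cy = (locator_box[1] + locator_box[3]) / 2.0
        return zoom, loc_cx - face_cx, loc_cy - face_cy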


In a variation of the above embodiment, where the camera is provided with a speaker, at step 410 the software is arranged to produce audio directions via the speaker in order to guide the user to a desired location within the scene. For example, referring to FIGS. 5.2 and 5.3, were the user to enter the scene from the left-hand side, he might position himself to the left of the subjects already present in the preview image. In such a case, and as a result of the zooming and panning of step 410, the captured image might no longer display those subjects, and the preview image would not substantially match the image captured. Thus, by guiding the user, for example by instructing him to move to the right, an image substantially matching the preview image can be captured.
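
Building on the previous sketch, the check below is one possible way the software could decide whether the pan and zoom of step 410 would push the originally tracked subjects out of the captured frame, and hence whether to issue an audio prompt instead; all names and conventions here are illustrative assumptions.

    def subjects_stay_in_frame(tracked_boxes, zoom, pan_x, pan_y, frame_w, frame_h):
        """Return True if every originally tracked subject (e.g. the faces 502)
        remains inside the frame after applying the step 410 pan and zoom."""
        for left, top, right, bottom in tracked_boxes:
            l = left * zoom + pan_x
            t = top * zoom + pan_y
            r = right * zoom + pan_x
            b = bottom * zoom + pan_y
            if l < 0 or t < 0 or r > frame_w or b > frame_h:
                return False
        return True

If the check fails, the camera would fall back to guiding the user, for example instructing him to move to the right, rather than cropping the other subjects out of the captured image.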


The invention is not limited to the embodiment(s) described herein but can be amended or modified without departing from the scope of the present invention.

Claims
  • 1. A method of making an image in a digital camera, the method comprising the following steps, not necessarily in the order stated: (a) capturing a first digital image of a scene into which a camera user is to be inserted (“first image”), (b) before capturing a second digital image including the camera user, superimposing on the first image a subject locator that comprises a symbol (“subject locator”) representing at least a part of a human subject, (c) also before capturing the second digital image including the camera user, scaling the subject locator to a desired size and moving it to a desired position relative to the first image, (d) capturing the second digital image including the camera user (“second image”), (e) extracting at least the part of the second image represented by the subject locator, (f) scaling the part of the second image represented by the subject locator to substantially the same size as the subject locator, and (g) inserting the scaled extracted part of the second image into the first image at the position of the subject locator.
  • 2. The method claimed in claim 1, in which the scaling step (f) is performed on the extracted part of the second image.
  • 3. The method claimed in claim 1, in which the scaling step (f) is performed on the second image prior to extraction of the part represented by the subject locator.
  • 4. The method claimed in claim 1, wherein the camera has two lens systems facing forwardly and rearwardly respectively of the camera, the first image being captured by the first lens system and the second image being captured by the second lens system.
  • 5. The method claimed in claim 1, wherein the first and second images are captured through the same lens system.
  • 6. The method claimed in claim 5, wherein the second image is captured using a self-timer.
  • 7. The method claimed in claim 1, in which the extracting step (e) is performed by at least one of face detection and foreground/background separation.
  • 8. A digital camera including an optical system for acquiring digital images and one or more processor-readable media having embodied therein processor-readable code for programming one or more processors to perform the method claimed in claim 1.
  • 9. The digital camera claimed in claim 8, wherein the camera forms part of a cell phone.
  • 10. The method claimed in claim 1, further comprising separating foreground and background for the first image, and wherein the superimposing of the subject locator partly overlaps the foreground of the first image, and the method further comprises selecting whether the user is to be inserted in front of or behind the foreground of the first image.
  • 11. The method claimed in claim 1, further comprising selecting a profile of the subject locator and determining the scaled extracted part of the second image based on the profile of the subject locator.
  • 12. The method claimed in claim 11, wherein the profile of the subject locator comprises head and shoulders, mid-shot or full length.
  • 13. The digital camera claimed in claim 8, in which the scaling is performed on the extracted part of the second image.
  • 14. The digital camera claimed in claim 8, in which the scaling is performed on the second image prior to extraction of the part represented by the subject locator.
  • 15. The digital camera claimed in claim 8, wherein the camera has two lens systems facing forwardly and rearwardly respectively of the camera, the first image being captured by the first lens system and the second image being captured by the second lens system.
  • 16. The digital camera claimed in claim 8, wherein the first and second images are captured through the same lens system.
  • 17. The digital camera claimed in claim 16, wherein the second image is captured using a self-timer.
  • 18. The digital camera claimed in claim 8, in which the extracting is performed by at least one of face detection and foreground/background separation.
  • 19. The digital camera claimed in claim 8, wherein the method further comprises separating foreground and background for the first image, and wherein the superimposing of the subject locator partly overlaps the foreground of the first image, and the method further comprises selecting whether the user is to be inserted in front of or behind the foreground of the first image.
  • 20. The digital camera claimed in claim 8, wherein the method further comprises selecting a profile of the subject locator and determining the scaled extracted part of the second image based on the profile of the subject locator.
  • 21. The digital camera claimed in claim 20, wherein the profile of the subject locator comprises head and shoulders, mid-shot or full length.
  • 22. One or more non-transitory processor-readable media having embodied therein processor-readable code for programming one or more processors to perform the method claimed in claim 1.
  • 23. The one or more non-transitory processor readable media claimed in claim 22, wherein the camera forms part of a cell phone.
  • 24. The one or more non-transitory processor readable media claimed in claim 22, in which the scaling is performed on the extracted part of the second image.
  • 25. The one or more non-transitory processor readable media claimed in claim 22, in which the scaling is performed on the second image prior to extraction of the part represented by the subject locator.
  • 26. The one or more non-transitory processor readable media claimed in claim 22, wherein the camera has two lens systems facing forwardly and rearwardly respectively of the camera, the first image being captured by the first lens system and the second image being captured by the second lens system.
  • 27. The one or more non-transitory processor readable media claimed in claim 22, wherein the first and second images are captured through the same lens system.
  • 28. The one or more non-transitory processor readable media claimed in claim 27, wherein the second image is captured using a self-timer.
  • 29. The one or more non-transitory processor readable media claimed in claim 22, in which the extracting is performed by at least one of face detection and foreground/background separation.
  • 30. The one or more non-transitory processor readable media claimed in claim 22, wherein the method further comprises separating foreground and background for the first image, and wherein the superimposing of the subject locator partly overlaps the foreground of the first image, and the method further comprises selecting whether the user is to be inserted in front of or behind the foreground of the first image.
  • 31. The one or more non-transitory processor readable media claimed in claim 22, wherein the method further comprises selecting a profile of the subject locator and determining the scaled extracted part of the second image based on the profile of the subject locator.
  • 32. The one or more non-transitory processor readable media claimed in claim 31, wherein the profile of the subject locator comprises head and shoulders, mid-shot or full length.
Related Publications (1)
Number Date Country
20090244296 A1 Oct 2009 US