Face categorization and annotation of a mobile phone contact list

Information

  • Patent Grant
  • Patent Number
    8,189,927
  • Date Filed
    Tuesday, March 4, 2008
  • Date Issued
    Tuesday, May 29, 2012
Abstract
A method of face categorization and annotation of a face image library includes automatically cropping a face within an acquired digital image or removing one or more non-facial items from the digital image, or both, and thereby generating a full-size face image. The full-size face image is stored with other indicia identifying a person corresponding to the face in a face image library of an embedded device such as a mobile camera phone or other handheld camera device.
Description
FIELD OF INVENTION

The invention relates to annotating images based on face regions within them.


BACKGROUND

Mobile phones and desktop applications are known to include address books. It is desired to have improved functionality in this regard for mobile devices such as mobile phones.


It is usually impossible or awkward to bring a phone or other mobile camera device close enough to a person's face to fill the frame. Thus, in many cases a face takes up only a small portion of the image, and, especially when sub-sampled as part of contact data, it can become unrecognizable. It is desired to alleviate this problem without overburdening the user.


SUMMARY OF THE INVENTION

A method is provided for face categorization and annotation of a face image library. A digital image acquisition device such as a mobile camera phone or other handheld camera device, acquires a digital image of a scene that includes a face. The face is automatically cropped or one or more non-facial items is/are removed from the digital image, or both, and a full-size face image is generated. The full-size face image is stored in a face image library along with other indicia identifying a person corresponding to the face.


The face image library may include an address book or a contact list, or both, of a mobile camera phone or other handheld camera device.


A series of preview images may be acquired, and candidate face regions may be extracted from successive frames. Location data and a cumulative confidence level that the candidate face region comprises a face may be maintained, and based on information from the series of preview images, the method may include determining that the face is present within the digital image.


Manual input of further information relating to the face may be received for storing with the full-size face image. Other indicia may be input manually by a user of the digital image acquisition device. The face may be displayed, and the user may be prompted to associate the face with the identifying indicia. A list of probable members of a contact list may be displayed, and a selection may be made from the list by the user.


The generating of the full-size face image may include building a whole face from two or more partial face images, brightening a poorly illuminated face, rotating a rotated or tilted face, correcting a red-eye, white eye or golden eye defect, and/or correcting a photographic blemish artifact within the face of the digital image, or combinations of these.


The method may include automatically transmitting the digital image to one or more persons identified within the image or to a user of the digital image acquisition device, or both.


The person identified with the face may be associated with an external device or service or both, and the digital image may be automatically transmitted to the external device or service or both.


A manual selection of a level of cropping of the face from the digital image may be made by a user.


A smile, an open eye and/or another partial face portion, may be added, from one or more stored facial images of the same identified person.


Face recognition may be applied to the face based on a library of known face images.


One or more computer readable media are also provided that are encoded with a computer program for programming one or more processors to perform any of the methods described herein.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a method of face categorization and annotation of a face image library including cropping of the face in accordance with certain embodiments.



FIG. 2 illustrates a face annotation method involving use of reference images in accordance with certain embodiments.



FIG. 3 illustrates a face annotation method involving manual input of identifying indicia in accordance with certain embodiments.



FIG. 4 illustrates a face annotation method including image processing enhancement of the face in accordance with certain embodiments.



FIG. 5 illustrates a method of face detection and identification including automatic transmission of an acquired image in accordance with certain embodiments.



FIG. 6 illustrates a method of face categorization and annotation of a face image library including a selected level of cropping of the face in accordance with certain embodiments.



FIG. 7 illustrates a method of face annotation including replacing a portion of the face from an image store in accordance with certain embodiments.



FIG. 8 illustrates a method of face detection and face recognition in accordance with certain embodiments.





DETAILED DESCRIPTION OF THE EMBODIMENTS

An advantageous method is provided for face categorization and annotation of a face image library. A digital image acquisition device such as a mobile camera phone or other handheld camera device, acquires a digital image of a scene that includes a face. The face is automatically cropped or one or more non-facial items is/are removed from the digital image, or both, and a full-size face image is generated. The full-size face image is stored in a face image library along with other indicia identifying a person corresponding to the face.


A Contact List may be used to link known or unknown, recognized or unrecognized, faces with personal contacts. Communications may be provided for performing additional services with images. Faces may be assigned to contacts through a user interface, and image quality may be improved by cropping and/or otherwise cleaning up the image, e.g., so that it includes only faces or the face is a certain size. A photographically captured face may be assigned to a built-in contact management system of a handheld device.


A picture may be taken, e.g., by a mobile camera-enabled device of any kind. Multiple images may also be captured around the time the picture is taken; these are preview, postview or reference images (together, "reference images"), typically having lower resolution than the main picture. A face detection routine then finds any faces in the picture, with or without the use of the reference images. The picture can be enhanced using one or more of the reference images, e.g., to add illumination, to replace a frown with a smile, to replace a blink with an open eye, or to add an otherwise occluded feature to a face. An enhanced image of a face in the picture is provided by cropping or otherwise processing the picture, with or without using the reference images, and/or by using the reference images to provide a better image of the face detected in the picture.


The face may be recognized or unrecognized. If it is unrecognized, then it can be added to a contacts list, along with image metadata and whatever other information a user may wish to add. If it is recognized, then the picture may be added as another look of the same person (e.g., with or without a beard, hat, glasses, certain jewelry, smiling or frowning, eyes open or blinking, one profile or the other or straight on, etc.), or just the smile, e.g., from the new picture may be added over the frown of the main picture, which is otherwise kept.


A technique is provided for tracking faces in a series of images on handheld mobile devices.


In one aspect, face categorization is enhanced using a built in contact book in the phone or other mobile device.


In another aspect, a workflow and GUI are provided, wherein a user takes a picture that is associated with a name in a contact book and uses face-tracking. The phone can crop the image as well as clean it up, e.g., to keep only the face. An advantage is that with this software, saving a face to a contact becomes useful, as opposed to assigning a random picture in which the face may be so small that it is not distinguishable or resolvable. Any of the techniques described herein may be combined with those described in U.S. Pat. Nos. 6,407,777, 7,310,450, 7,315,630, 7,336,821, and 7,315,631, and US published application nos. 2006/0204110, 2006/0120599, 2006/0140455, 2006/0098890, 2007/0201725, and 2007/0110305, and U.S. application Ser. Nos. 11/753,098, 11/752,925, 11/833,224, 10/763,801, 60/829,883, 11/753,397, 11/766,674, and 11/773,868, which are assigned to the same assignee and are hereby incorporated by reference.



FIG. 1 illustrates a method of face categorization and annotation of a face image library including cropping of the face in accordance with certain embodiments. A digital image is acquired at 102 including a face. The face is then automatically cropped at 104 and/or a non-facial item is removed from the digital image at 106. A full-size face image is generated at 108. The full-size face image is stored at 110 with indicia identifying a person corresponding to the face.


A digital camera may employ a face tracker which analyzes a preview image stream and extracts candidate face regions from successive frames of the preview image stream. These candidate regions are made available within the camera for additional image processing. A detailed description is given in U.S. Pat. No. 7,315,631, which is incorporated by reference. The face tracker maintains a history of each candidate face region including location data and a cumulative confidence level that the candidate region is indeed a face.
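
By way of illustration only (this sketch is not part of the patent disclosure), the bookkeeping such a face tracker performs might look as follows in Python, assuming per-frame detections, each a bounding box with a score, arrive from a separate face detector that is not shown:

    from dataclasses import dataclass, field

    @dataclass
    class Candidate:
        box: tuple                  # (x, y, w, h), last known location
        confidence: float = 0.0     # cumulative confidence it is a face
        history: list = field(default_factory=list)  # location data per frame

    def iou(a, b):
        # Intersection-over-union of two (x, y, w, h) boxes.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0

    class FaceTracker:
        # Maintains a history of each candidate face region across a
        # preview stream, including location data and a cumulative
        # confidence level that the candidate region is indeed a face.
        def __init__(self, match_iou=0.3, decay=0.8):
            self.candidates = []
            self.match_iou = match_iou
            self.decay = decay      # confidence decay when a candidate is missed

        def update(self, detections):
            # detections: list of ((x, y, w, h), score) for one preview frame.
            matched = set()
            for box, score in detections:
                best = max(self.candidates,
                           key=lambda c: iou(c.box, box), default=None)
                if best is not None and iou(best.box, box) >= self.match_iou:
                    best.box = box
                    best.confidence += score       # accumulate evidence
                    best.history.append(box)
                    matched.add(id(best))
                else:
                    self.candidates.append(Candidate(box, score, [box]))
            for c in self.candidates:              # fade candidates missed this frame
                if id(c) not in matched:
                    c.confidence *= self.decay

        def confirmed(self, threshold=3.0):
            # Candidates whose cumulative confidence marks them as real faces.
            return [c for c in self.candidates if c.confidence >= threshold]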



FIG. 2 illustrates a face annotation method involving use of reference images in accordance with certain embodiments. A series of reference images are acquired at 202. The reference images may be preview or post-view images, or images acquired with a different sensor than the main image at the same time or at a different time than the acquisition of the main image, at full or low resolution (see, e.g., U.S. application 60/945,558, incorporated by reference). One or more candidate face regions are extracted from successive frames at 204. Location data and a cumulative confidence level that each candidate face region comprises a face are maintained at 206. The main image is acquired at 208 including a face. Based on the reference images, it is determined at 210 that the face is present within the main image. The face is automatically cropped at 212 and/or one or more non-facial items are removed from the digital image at 214. A full-size face image is then generated at 216 based on 212 and/or 214. A full-size face image is stored with indicia identifying a person corresponding to the face at 218.


Certain embodiments involve devices such as hand held communication devices, such as mobile phones, that have a "phone book" built into the device. Face detection is tied into a process wherein a user can assign a photographically-acquired face to an existing or a new contact. Moreover, the image-processing unit can provide a saved region corresponding to the captured face, e.g., by cropping and/or removing other unnecessary details, by building a whole face from partial face images, and/or by enhancing what may otherwise be a good picture of a particular contact: brightening a poorly illuminated face; correcting a rotated, tilted or partially-occluded face; or removing red-eye, white eye or golden eye defects, or other blemishes possibly induced by dust artifacts in the imaging system of the camera, among other processing that is possible (see the cited references below).
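
As a hedged illustration (not the patented implementation), saving a cropped face region against a contact might look like the following sketch, where the detector's box format and the plain-dict contact store are assumptions:

    def crop_face(image, box, margin=0.25):
        # Crop a detected face from an H x W x 3 array, expanding the
        # detector's box by a relative margin so the whole head is kept.
        x, y, w, h = box
        mx, my = int(w * margin), int(h * margin)
        x0, y0 = max(0, x - mx), max(0, y - my)
        x1 = min(image.shape[1], x + w + mx)
        y1 = min(image.shape[0], y + h + my)
        return image[y0:y1, x0:x1]

    def assign_face_to_contact(contacts, name, face_image):
        # Store the cropped face against a contact entry. A plain dict
        # stands in for the handset's built-in contact manager.
        entry = contacts.setdefault(name, {})
        entry["face"] = face_image
        return entry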


Example

The process can be directed from the phone book or from camera software. For example:

    • 1. In the contact manager: create a new item.
    • 2. In the contact manager: option to insert new data.
    • 3. In the contact manager: option to assign a picture, which will then:
    • 4. let the user select: Camera or Albums.
    • 5. For Camera, the user will take a picture of a subject and the camera will save the cropped image.
    • 6. For the Album, the handset will display the captured images with faces, and the user can select the right picture.
    • 7. Returning to the contact manager: the face will be assigned to the contact.


Starting from the Camera system:

    • a. The user grabs a picture (and optionally a face is detected) and has an option to save the image or to "assign face to a contact".
    • b. The user will then select an existing contact or create a new contact to assign the face to.


The system can also be connected to a face recognition subsystem.


In other embodiments, the image acquisition appliance includes a smartphone which incorporates full mobile phone capability as well as camera capabilities. In this embodiment the recognition subsystem analyzes detected face regions, extracts a pattern of DCT feature vectors, and determines whether such face regions match any of a set of "known" patterns. These "known" patterns will typically have been derived from a larger image collection stored on a user's PC or in a web service, and match people among the user's friends, colleagues and family, although the patterns may also be stored on the mobile device. We remark that each person may be associated with more than one pattern, as people can have different appearances at different times. If a face region matches a "known" pattern, then that face region, and the image it was extracted from, can be associated with the "known" pattern and the person that pattern is linked to. Some aspects of associating multiple face recognition patterns, or "faceprints", with individual persons, or "contacts", are described in U.S. patent application Ser. No. 10/764,339, which is hereby incorporated by reference. Some recognition functions may be performed on a mobile device and the remainder on a desktop PC.
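
A simplified, purely illustrative sketch of DCT-based matching follows; the nearest-neighbour resampling, the coefficient counts, and the distance threshold are all assumptions rather than values from the patent:

    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix of size n x n.
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def dct_features(face, size=32, keep=8):
        # Feature vector from the low-frequency 2-D DCT coefficients of a
        # grayscale face crop, resized to size x size by nearest-neighbour
        # sampling (a real system would resample properly).
        h, w = face.shape
        ys = np.arange(size) * h // size
        xs = np.arange(size) * w // size
        patch = face[np.ix_(ys, xs)].astype(np.float64)
        c = dct_matrix(size)
        coeffs = c @ patch @ c.T
        return coeffs[:keep, :keep].ravel()

    def best_match(features, known_patterns, threshold=50.0):
        # known_patterns: contact name -> list of stored feature vectors;
        # each person may carry several patterns for different appearances.
        best_name, best_dist = None, np.inf
        for name, patterns in known_patterns.items():
            for p in patterns:
                d = np.linalg.norm(features - p)
                if d < best_dist:
                    best_name, best_dist = name, d
        return best_name if best_dist <= threshold else None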


Initial training or learning may be performed outside the phone, e.g., in an expert system format, because better results in training and associating patterns with people can initially be achieved with larger image collections. Nevertheless it is possible to implement training, from scratch, within the device, although the process may be tedious for a casual user.


Certain embodiments provide for the creation of linkages between the known patterns of a face recognition system and a phone contact list of a user. This can be achieved by uploading a contact list onto a user's PC and directly matching the face regions associated with each "known" pattern with a member of the contact list. Alternatively, it can be achieved on the phone by cycling through "known" patterns and displaying the associated face regions.
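
One plausible shape for the on-phone association loop, sketched only for illustration, with the handset's display and selection UI stubbed out as callables:

    def link_patterns_to_contacts(patterns, contacts, display_face, prompt_choice):
        # Cycle through "known" recognition patterns, display the face
        # region associated with each, and let the user pick the matching
        # contact-list member. display_face and prompt_choice stand in
        # for the handset's UI.
        links = {}
        for pattern_id, face_region in patterns.items():
            display_face(face_region)
            choice = prompt_choice(contacts)   # a contact name, or None to skip
            if choice is not None:
                links[pattern_id] = choice
        return links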


In other embodiments, ease of usability of contact management is provided on a hand-held device using a built in camera.


Advantageously, an image is cropped in certain embodiments herein so that the face more substantially fills the frame.


Furthermore, using the camera in connection with the built-in contact management of a device enables use of the mobile phone as an annotation device for improving the quality of the recognition process and creating links between newly determined recognition patterns and the contact list, e.g., through a single user action. This relies on the fact that many images will either contain "unknown" people or capture "known" people with a different appearance from their normal set of patterns. When such unrecognized facial regions are detected by the recognition subsystem, it displays an extracted face region on the screen and prompts the user to associate this "new" region with a member of the contact list. In alternative embodiments, the region may have a probabilistic association with members of the contact list, and these may be ordered according to the determined probabilities.
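
Continuing the earlier feature-vector sketch, one plausible way to order contact-list members by match probability; the softmax-over-distances scheme is an assumption, as the patent only requires some probabilistic ordering:

    import numpy as np

    def rank_contacts(features, known_patterns, temperature=10.0):
        # Order contact-list members by how probable a match each is to an
        # unrecognized face region: per-contact minimum feature distance,
        # turned into a probability with a softmax.
        names, dists = [], []
        for name, patterns in known_patterns.items():
            names.append(name)
            dists.append(min(np.linalg.norm(features - p) for p in patterns))
        scores = np.exp(-np.asarray(dists) / temperature)
        probs = scores / scores.sum()
        order = np.argsort(-probs)
        return [(names[i], float(probs[i])) for i in order]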



FIG. 3 illustrates a face annotation method involving manual input of identifying indicia in accordance with certain embodiments. A digital image is acquired at 302 including a face. The face is then automatically cropped at 304 and/or a non-facial item is removed from the digital image at 306. A full-size face image is generated at 308. The full-size face image is displayed at 310. The user may be prompted to identify a face at 312, after which manual input may be received at 314. A list of probable members may be displayed at 316, from which a selection may be received at 318. Finally, the full-size face image is stored with indicia identifying the person corresponding to the face.


The face-tracking system, discussed above with reference to FIG. 2, may also automatically rotate the image when determined to be advantageous based on the detection of a face at a particular angle. The software may allow “rotating” the face if it is not full frontal.



FIG. 4 illustrates a face annotation method including image processing enhancement of the face in accordance with certain embodiments. A digital image is acquired at 402 including a face. The face is then automatically cropped at 404 and/or a non-facial item is removed from the digital image at 406. A full-size face image is generated at 408. Several image processing options are available. A whole face may be built from two or more partial facial images at 410. A poorly illuminated face may be brightened at 412. A rotated face may be rotated right or left at 414 or up or down at 416, or in a random direction. An eye defect may be corrected at 418, such as red eye 420, white eye 422 or golden eye 424 (see U.S. Pat. Nos. 7,042,505, and 7,336,821, and US published applications 2005/0041121, 2006/0120599, and 2007/0116380, and U.S. application Ser. Nos. 11/462,035, 60/865,662, 60/892,884, 11/690,834, 11/769,206, and 11/841,855, which are hereby incorporated by reference). A photographic blemish artifact may be corrected at 426. Such blemish may be caused by an imperfection in the optical path caused by dust (see US published application 2005/0068452 and U.S. application Ser. No. 11/836,744, which are incorporated by reference). A full size face image may be stored at 428 along with identifying indicia of a person corresponding to the face.
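
As an illustrative sketch of one enhancement from the options above, gamma correction can brighten a poorly illuminated face region; the target mean and the gamma rule are assumptions, not disclosed values:

    import numpy as np

    def brighten_face(image, box, target_mean=0.5):
        # Brighten a poorly illuminated face region by gamma correction,
        # choosing gamma so the region's mean intensity moves toward
        # target_mean. image is an 8-bit array; box is (x, y, w, h).
        x, y, w, h = box
        out = image.astype(np.float64) / 255.0
        region = out[y:y + h, x:x + w]
        mean = region.mean()
        if 0.0 < mean < target_mean:           # only brighten dark faces
            gamma = np.log(target_mean) / np.log(mean)   # 0 < gamma < 1
            out[y:y + h, x:x + w] = region ** gamma
        return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)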


This associating of the new face region with a member of the contact list achieves at least the following advantageous results in a single action:


It firstly associates the recognition pattern which is derived from the face region with a person in the user's contact list; this information can now be added to the set of recognition patterns and can be applied later as part of a retraining process for optimizing the set of recognition patterns associated with a user's image collection.


Another result is that it provides an association between this new image and a communications device or system of the person determined to be within the image. This could be an e-mail address or a mobile phone number. This association enables a range of added-value picture services, an example of which is the automated transmitting of the image to the people within the image. Faces may be found in an image, and the image may be automatically emailed to a user and/or persons associated with the faces found (see, e.g., US published patent application no. 20040243671, which is incorporated by reference), although this does not use the inherent communications capabilities of the device in which the images are acquired. An enhancement of this, which relates to the "pairing mechanism" described in US published application no. 2006/0284982, which is incorporated by reference, is to provide a pairing mechanism which is triggered by selecting a member of the contact list; in this embodiment a user can associate such a member of the list with an external, networked device or service. Once such an association is established, each image which is recognized as being associated with that person can be marked for transmission to the associated device/service and placed in a transmission queue; when the service/device next becomes available on the network, these images can be transmitted to that device/service.
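
A minimal sketch of such a pairing-and-queueing mechanism; the availability check and send routine are placeholders for real network services, not a disclosed API:

    import queue

    class TransmissionQueue:
        # Queues images marked for transmission to the device or service
        # paired with a recognized contact, and delivers them when that
        # device/service next becomes available on the network.
        def __init__(self, pairings, is_available, send):
            self.pairings = pairings          # contact name -> device/service id
            self.is_available = is_available  # callable: id -> bool
            self.send = send                  # callable: (id, image) -> None
            self.pending = queue.Queue()

        def mark_for_transmission(self, contact, image):
            target = self.pairings.get(contact)
            if target is not None:
                self.pending.put((target, image))

        def flush(self):
            # Deliver queued images; re-queue any whose target is offline.
            still_waiting = []
            while not self.pending.empty():
                target, image = self.pending.get()
                if self.is_available(target):
                    self.send(target, image)
                else:
                    still_waiting.append((target, image))
            for item in still_waiting:
                self.pending.put(item)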



FIG. 5 illustrates a method of face detection and identification including automatic transmission of an acquired image in accordance with certain embodiments. A digital image is acquired at 502 including a face. The face is then automatically cropped at 504 and/or a non-facial item is removed from the digital image at 506. A full-size face image is generated at 508. A full-size face image may be stored at 510 along with identifying indicia of a person corresponding to the face. In addition, the digital image may be transmitted at 512 to an identified person or to the camera user, optionally depending on whether the person is recognized in the image. A person may be associated at 514 with an external device or service. The digital image may be automatically transmitted to the external device or service at 516, optionally depending on whether the person is recognized in the image.


Alternative Methods

The acquired image may be added to such a database as part of the process.


In the case where multiple faces are detected, a user interface may be implemented that allows the user to walk through the faces one by one and decide whether each is a face they would like to include or pass.


In a case where the camera is set in a mode of "assigning a face to a contact", there may not be a desire to "capture" an image; rather, the camera in preview (video) mode may continuously capture multiple images until an "acceptable image" is acquired. Such an acceptable image may be a super-resolution composite of multiple frames, or a frame captured when a face is detected in frontal pose, when the image reaches focus on the face, when the light is sufficient, etc.
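
A sketch of such a continuous-preview capture loop; the three acceptance predicates are stand-ins for real focus, pose, and exposure measurements from the camera pipeline:

    def wait_for_acceptable_frame(preview_frames, is_frontal, is_in_focus,
                                  is_well_lit, max_frames=300):
        # Consume preview frames until one meets the acceptance criteria
        # (frontal face, focus on the face, sufficient light). The three
        # predicates stand in for real camera/ISP measurements.
        for i, frame in enumerate(preview_frames):
            if i >= max_frames:
                break
            if is_frontal(frame) and is_in_focus(frame) and is_well_lit(frame):
                return frame
        return None            # no acceptable frame within the budget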


The process as defined herein can be extended to support Desktop based contact management software such as “ACT!” and Microsoft Outlook.


Example





    • User selects a contact;

    • User chooses the option “add image”;

    • User browses a selection of images (e.g. Folders); and

    • User selects a single image, or alternatively selects a video clip;

    • Software detects face regions; and

    • Software crops the image to include only the face (or optionally the face and shoulders). The software may select the level of cropping (face only, head & shoulders, etc.), as sketched below.
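
An illustrative sketch of selectable cropping levels; the margin values below are assumptions, chosen only to show the mechanism:

    # Relative margins around the detector's face box for each level of
    # cropping; the numeric values are illustrative only.
    CROP_LEVELS = {
        "face_only": 0.10,
        "head": 0.35,
        "head_and_shoulders": 0.80,
    }

    def crop_at_level(image, box, level="face_only"):
        # Crop at a user- or software-selected level: the margin grows
        # with the level so more of the head and shoulders is retained.
        x, y, w, h = box
        m = CROP_LEVELS[level]
        mx, my = int(w * m), int(h * m)
        x0, y0 = max(0, x - mx), max(0, y - my)
        x1 = min(image.shape[1], x + w + mx)
        y1 = min(image.shape[0], y + h + my)
        return image[y0:y1, x0:x1]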






FIG. 6 illustrates a method of face categorization and annotation of a face image library including a selected level of cropping of the face in accordance with certain embodiments. A digital image is acquired at 602 including a face. A manual selection of a level of cropping is received at 604 before or after the image is acquired. The face is then automatically cropped at 606. A full-size face image is generated at 608. The full-size face image is stored at 610 along with identifying indicia of a person corresponding to the face.


Image processing can be added to add facial expressions such as a smile. Accordingly, FIG. 7 illustrates a method of face annotation including replacing a portion of the face from an image store in accordance with certain embodiments. A digital image is acquired at 702 including a face. The face is then automatically cropped at 704. A non-facial item may also be removed at 704, or removed instead of cropping, and/or another face or other faces may be removed at 704. A full-size face image is generated at 706. A smile may be added from a stored image of the same person at 708, e.g., to replace a frown or occluded mouth in the image, or alternatively from a different person who may look more or less like the person. An open eye or other partial facial portion may be added at 710 from a stored image. The full-size face image is stored at 710 along with identifying indicia of a person corresponding to the face.
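
As an illustrative sketch of replacing a facial portion from a stored image, the following performs a feathered region paste between two faces of the same person; alignment and seamless blending are assumed to be handled elsewhere:

    import numpy as np

    def paste_region(target_face, donor_face, region_box, feather=5):
        # Replace a facial region (e.g., mouth or eye) of target_face with
        # the same region from a stored donor_face of the same person.
        # Both faces are assumed already aligned and equal in size; a real
        # system would register the faces and blend seamlessly.
        x, y, w, h = region_box
        out = target_face.astype(np.float64).copy()
        donor = donor_face.astype(np.float64)
        alpha = np.ones((h, w))             # feathered mask softens the seam
        for i in range(feather):
            fade = (i + 1) / (feather + 1)
            alpha[i, :] *= fade
            alpha[-1 - i, :] *= fade
            alpha[:, i] *= fade
            alpha[:, -1 - i] *= fade
        if out.ndim == 3:
            alpha = alpha[:, :, None]
        out[y:y + h, x:x + w] = (alpha * donor[y:y + h, x:x + w]
                                 + (1.0 - alpha) * out[y:y + h, x:x + w])
        return out.astype(target_face.dtype)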


For the recognition of known faces, the database may reside outside the handset (on a server), in case it is necessary to access a larger database than is desirable, or perhaps than is possible, on a handheld camera phone or other camera device.



FIG. 8 illustrates a method of face detection and face recognition in accordance with certain embodiments. A digital image is acquired at 802 including a face. The face is then automatically cropped at 804. Again, a non-facial item may also be removed at 804, or removed instead of cropping, and/or another face or other faces may be removed at 804. A full-size face image is generated at 806. Face recognition may be applied at 808 to the face based on a library of known face images. The full-size face image is stored at 810 along with identifying indicia of a person corresponding to the face.


While exemplary drawings and specific embodiments of the present invention have been described and illustrated, it is to be understood that the scope of the present invention is not to be limited to the particular embodiments discussed. Thus, the embodiments shall be regarded as illustrative rather than restrictive, and it should be understood that variations may be made in those embodiments by workers skilled in the arts without departing from the scope of the present invention.


In addition, in methods that may be performed according to preferred embodiments herein and that may have been described above, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations, except for those where a particular order may be expressly set forth or where those of ordinary skill in the art may deem a particular order to be necessary.


In addition, all references cited herein, as well as the background, invention summary, abstract and brief description of the drawings, as well as U.S. Pat. No. 6,407,777, and US published patent application nos. 20040243671 (which discloses using faces in emails) and 20040174434 (which discloses determining meta-information by sending to a server, then back to a mobile device), 2005/0041121, 2005/0031224, 2005/0140801, 2006/0204110, 2006/0093212, 2006/0120599, and 2006/0140455, and U.S. patent application Nos. 60/773,714, 60/804,546, 60/865,375, 60/865,622, 60/829,127, and 60/821,165, and Ser. Nos. 11/554,539, 11/464,083, 11/462,035, 11/282,954, 11/027,001, 10/764,339, 10/842,244, 11/024,046, 11/233,513, and 11/460,218, are all incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments.

Claims
  • 1. A method of face categorization and annotation of a face image library, comprising: acquiring with a digital image acquisition device a digital image of a scene that includes a face; acquiring a series of relatively low resolution reference images captured around the time of capture of said digital image; extracting candidate face regions from successive frames of said reference images; maintaining information from the successive frames of said reference images, including location data and a cumulative confidence level that the candidate face regions extracted from said successive frames of said series of reference images comprise said face, and based on the information from the series of reference images, determining that said face is present within said digital image; automatically cropping the face or removing one or more non-facial items from the digital image, or both, and thereby generating a full-size face image; and storing the full-size face image with other indicia identifying a person corresponding to the face in a face image library.
  • 2. The method of claim 1, wherein the face image library comprises an address book or a contact list, or both, of a mobile camera phone or other handheld camera device.
  • 3. The method of claim 2, further comprising receiving manual input of further information relating to the face for storing with the full-size face image.
  • 4. The method of claim 2, further comprising receiving said other indicia manually by a user of the digital image acquisition device.
  • 5. The method of claim 4, further comprising displaying the face and prompting the user to associate the face with the identifying indicia.
  • 6. The method of claim 5, further comprising displaying a list of probable members of a contact list and receiving a selection from the list by the user.
  • 7. The method of claim 2, wherein the generating of the full-size face image further comprises building a whole face from two or more partial face images.
  • 8. The method of claim 2, wherein the generating of the full-size face image further comprises brightening a poorly illuminated face, or rotating a rotated or tilted face, or combinations thereof.
  • 9. The method of claim 2, wherein the generating of the full-size face image further comprises correcting a red-eye, white eye or golden eye defect, or combinations thereof.
  • 10. The method of claim 2, wherein the generating of the full-size face image further comprises correcting a photographic blemish artifact within the face of the digital image.
  • 11. The method of claim 2, further comprising automatically transmitting the digital image to one or more persons identified within the image or to a user of the digital image acquisition device, or both.
  • 12. The method of claim 2, further comprising associating said person with an external device or service or both, and automatically transmitting said digital image to the external device or service or both.
  • 13. The method of claim 2, further comprising receiving manual selection of a level of cropping of the face from the digital image.
  • 14. The method of claim 2, further comprising adding a smile or open eye or other partial face portion, or combinations thereof, from one or more stored facial images of said same person.
  • 15. The method of claim 2, further comprising applying face recognition to the face based on a library of known face images.
  • 16. One or more non-transitory computer readable media encoded with a computer program for programming one or more processors to perform a method of face categorization and annotation of a face image library, the method comprising: acquiring with a digital image acquisition device a digital image of a scene that includes a face; acquiring a series of relatively low resolution reference images captured around the time of capture of said digital image; extracting candidate face regions from successive frames of said reference images; maintaining information from the successive frames of the reference images, including location data and a cumulative confidence level that the candidate face regions extracted from said successive frames of said series of reference images comprise said face, and based on the information from the series of reference images, determining that said face is present within said digital image; automatically cropping the face or removing one or more non-facial items from the digital image, or both, and thereby generating a full-size face image; and storing the full-size face image with other indicia identifying a person corresponding to the face in a face image library.
  • 17. The one or more non-transitory computer readable media of claim 16, wherein the face image library comprises an address book or a contact list, or both, of a mobile camera phone or other handheld camera device.
  • 18. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises receiving manual input of further information relating to the face for storing with the full-size face image.
  • 19. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises receiving said other indicia manually by a user of the digital image acquisition device.
  • 20. The one or more non-transitory computer readable media of claim 19, wherein the method further comprises displaying the face and prompting the user to associate the face with the identifying indicia.
  • 21. The one or more non-transitory computer readable media of claim 20, wherein the method further comprises displaying a list of probable members of a contact list and receiving a selection from the list by the user.
  • 22. The one or more non-transitory computer readable media of claim 17, wherein the generating of the full-size face image further comprises building a whole face from two or more partial face images.
  • 23. The one or more non-transitory computer readable media of claim 17, wherein the generating of the full-size face image further comprises brightening a poorly illuminated face, or rotating a rotated or tilted face, or combinations thereof.
  • 24. The one or more non-transitory computer readable media of claim 17, wherein the generating of the full-size face image further comprises correcting a red-eye, white eye or golden eye defect, or combinations thereof.
  • 25. The one or more non-transitory computer readable media of claim 17, wherein the generating of the full-size face image further comprises correcting a photographic blemish artifact within the face of the digital image.
  • 26. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises automatically transmitting the digital image to one or more persons identified within the image or to a user of the digital image acquisition device, or both.
  • 27. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises associating said person with an external device or service or both, and automatically transmitting said digital image to the external device or service or both.
  • 28. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises receiving manual selection of a level of cropping of the face from the digital image.
  • 29. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises adding a smile or open eye or other partial face portion, or combinations thereof, from one or more stored facial images of said same person.
  • 30. The one or more non-transitory computer readable media of claim 17, wherein the method further comprises applying face recognition to the face based on a library of known face images.
PRIORITY

This application claims the benefit of priority to U.S. patent application No. 60/893,114, filed Mar. 5, 2007, which is incorporated by reference.

US Referenced Citations (213)
Number Name Date Kind
4047187 Mashimo et al. Sep 1977 A
4317991 Stauffer Mar 1982 A
4376027 Smith et al. Mar 1983 A
RE31370 Mashimo et al. Sep 1983 E
4638364 Hiramatsu Jan 1987 A
5018017 Sasaki et al. May 1991 A
RE33682 Hiramatsu Sep 1991 E
5063603 Burt Nov 1991 A
5164831 Kuchta et al. Nov 1992 A
5164992 Turk et al. Nov 1992 A
5227837 Terashita Jul 1993 A
5280530 Trew et al. Jan 1994 A
5291234 Shindo et al. Mar 1994 A
5311240 Wheeler May 1994 A
5384912 Ogrinc et al. Jan 1995 A
5430809 Tomitaka Jul 1995 A
5432863 Benati et al. Jul 1995 A
5488429 Kojima et al. Jan 1996 A
5496106 Anderson Mar 1996 A
5500671 Andersson et al. Mar 1996 A
5576759 Kawamura et al. Nov 1996 A
5633678 Parulski et al. May 1997 A
5638136 Kojima et al. Jun 1997 A
5642431 Poggio et al. Jun 1997 A
5680481 Prasad et al. Oct 1997 A
5684509 Hatanaka et al. Nov 1997 A
5706362 Yabe Jan 1998 A
5710833 Moghaddam et al. Jan 1998 A
5724456 Boyack et al. Mar 1998 A
5744129 Dobbs et al. Apr 1998 A
5745668 Poggio et al. Apr 1998 A
5774129 Poggio et al. Jun 1998 A
5774747 Ishihara et al. Jun 1998 A
5774754 Ootsuka Jun 1998 A
5781650 Lobo et al. Jul 1998 A
5802208 Podilchuk et al. Sep 1998 A
5812193 Tomitaka et al. Sep 1998 A
5818975 Goodwin et al. Oct 1998 A
5835616 Lobo et al. Nov 1998 A
5842194 Arbuckle Nov 1998 A
5844573 Poggio et al. Dec 1998 A
5852823 De Bonet Dec 1998 A
5870138 Smith et al. Feb 1999 A
5911139 Jain et al. Jun 1999 A
5911456 Tsubouchi et al. Jun 1999 A
5978519 Bollman et al. Nov 1999 A
5991456 Rahman et al. Nov 1999 A
6053268 Yamada Apr 2000 A
6072904 Desai et al. Jun 2000 A
6097470 Buhr et al. Aug 2000 A
6101271 Yamashita et al. Aug 2000 A
6128397 Baluja et al. Oct 2000 A
6142876 Cumbers Nov 2000 A
6148092 Qian Nov 2000 A
6188777 Darrell et al. Feb 2001 B1
6192149 Eschbach et al. Feb 2001 B1
6234900 Cumbers May 2001 B1
6246790 Huang et al. Jun 2001 B1
6249315 Holm Jun 2001 B1
6263113 Abdel-Mottaleb et al. Jul 2001 B1
6268939 Klassen et al. Jul 2001 B1
6282317 Luo et al. Aug 2001 B1
6301370 Steffens et al. Oct 2001 B1
6332033 Qian Dec 2001 B1
6349373 Sitka et al. Feb 2002 B2
6351556 Loui et al. Feb 2002 B1
6389181 Shaffer et al. May 2002 B2
6393148 Bhaskar May 2002 B1
6400470 Takaragi et al. Jun 2002 B1
6400830 Christian et al. Jun 2002 B1
6404900 Qian et al. Jun 2002 B1
6407777 DeLuca Jun 2002 B1
6418235 Morimoto et al. Jul 2002 B1
6421468 Ratnakar et al. Jul 2002 B1
6430307 Souma et al. Aug 2002 B1
6430312 Huang et al. Aug 2002 B1
6438264 Gallagher et al. Aug 2002 B1
6456732 Kimbell et al. Sep 2002 B1
6459436 Kumada et al. Oct 2002 B1
6473199 Gilman et al. Oct 2002 B1
6501857 Gotsman et al. Dec 2002 B1
6502107 Nishida Dec 2002 B1
6504942 Hong et al. Jan 2003 B1
6504951 Luo et al. Jan 2003 B1
6516154 Parulski et al. Feb 2003 B1
6526161 Yan Feb 2003 B1
6554705 Cumbers Apr 2003 B1
6556708 Christian et al. Apr 2003 B1
6564225 Brogliatti et al. May 2003 B1
6567775 Maali et al. May 2003 B1
6567983 Shiimori May 2003 B1
6606398 Cooper Aug 2003 B2
6633655 Hong et al. Oct 2003 B1
6661907 Ho et al. Dec 2003 B2
6697503 Matsuo et al. Feb 2004 B2
6697504 Tsai Feb 2004 B2
6754389 Dimitrova et al. Jun 2004 B1
6760465 McVeigh et al. Jul 2004 B2
6765612 Anderson et al. Jul 2004 B1
6783459 Cumbers Aug 2004 B2
6801250 Miyashita Oct 2004 B1
6826300 Liu et al. Nov 2004 B2
6850274 Silverbrook et al. Feb 2005 B1
6876755 Taylor et al. Apr 2005 B1
6879705 Tao et al. Apr 2005 B1
6928231 Tajima Aug 2005 B2
6940545 Ray et al. Sep 2005 B1
6965684 Chen et al. Nov 2005 B2
6993157 Oue et al. Jan 2006 B1
7003135 Hsieh et al. Feb 2006 B2
7020337 Viola et al. Mar 2006 B2
7027619 Pavlidis et al. Apr 2006 B2
7035456 Lestideau Apr 2006 B2
7035467 Nicponski Apr 2006 B2
7038709 Verghese May 2006 B1
7038715 Flinchbaugh May 2006 B1
7046339 Stanton et al. May 2006 B2
7050607 Li et al. May 2006 B2
7064776 Sumi et al. Jun 2006 B2
7082212 Liu et al. Jul 2006 B2
7092555 Lee et al. Aug 2006 B2
7099510 Jones et al. Aug 2006 B2
7110575 Chen et al. Sep 2006 B2
7113641 Eckes et al. Sep 2006 B1
7119838 Zanzucchi et al. Oct 2006 B2
7120279 Chen et al. Oct 2006 B2
7151843 Rui et al. Dec 2006 B2
7158680 Pace Jan 2007 B2
7162076 Liu Jan 2007 B2
7162101 Itokawa et al. Jan 2007 B2
7171023 Kim et al. Jan 2007 B2
7171025 Rui et al. Jan 2007 B2
7175528 Cumbers Feb 2007 B1
7187786 Kee Mar 2007 B2
7190829 Zhang et al. Mar 2007 B2
7200249 Okubo et al. Apr 2007 B2
7218759 Ho et al. May 2007 B1
7227976 Jung et al. Jun 2007 B1
7254257 Kim et al. Aug 2007 B2
7274822 Zhang et al. Sep 2007 B2
7274832 Nicponski Sep 2007 B2
7315631 Corcoran et al. Jan 2008 B1
7317816 Ray et al. Jan 2008 B2
7324670 Kozakaya et al. Jan 2008 B2
7330570 Sogo et al. Feb 2008 B2
7357717 Cumbers Apr 2008 B1
7440594 Takenaka Oct 2008 B2
7551755 Steinberg et al. Jun 2009 B1
7555148 Steinberg et al. Jun 2009 B1
7558408 Steinberg et al. Jul 2009 B1
7564994 Steinberg et al. Jul 2009 B1
7587068 Steinberg et al. Sep 2009 B1
20010028731 Covell et al. Oct 2001 A1
20010031129 Tajima Oct 2001 A1
20010031142 Whiteside Oct 2001 A1
20020105662 Patton et al. Aug 2002 A1
20020106114 Yan et al. Aug 2002 A1
20020113879 Battle et al. Aug 2002 A1
20020114535 Luo Aug 2002 A1
20020132663 Cumbers Sep 2002 A1
20020136433 Lin Sep 2002 A1
20020141586 Margalit et al. Oct 2002 A1
20020154793 Hillhouse et al. Oct 2002 A1
20020168108 Loui et al. Nov 2002 A1
20020172419 Lin et al. Nov 2002 A1
20030025812 Slatter Feb 2003 A1
20030035573 Duta et al. Feb 2003 A1
20030048926 Watanabe Mar 2003 A1
20030048950 Savakis et al. Mar 2003 A1
20030052991 Stavely et al. Mar 2003 A1
20030059107 Sun et al. Mar 2003 A1
20030059121 Savakis et al. Mar 2003 A1
20030084065 Lin et al. May 2003 A1
20030086134 Enomoto May 2003 A1
20030086593 Liu et al. May 2003 A1
20030107649 Flickner et al. Jun 2003 A1
20030118216 Goldberg Jun 2003 A1
20030122839 Matraszek et al. Jul 2003 A1
20030128877 Nicponski Jul 2003 A1
20030156202 Van Zee Aug 2003 A1
20030158838 Okusa Aug 2003 A1
20030198368 Kee Oct 2003 A1
20030210808 Chen et al. Nov 2003 A1
20040008258 Aas et al. Jan 2004 A1
20040136574 Kozakaya et al. Jul 2004 A1
20040145660 Kusaka Jul 2004 A1
20040207722 Koyama et al. Oct 2004 A1
20040210763 Jonas Oct 2004 A1
20040213454 Lai et al. Oct 2004 A1
20040223063 DeLuca et al. Nov 2004 A1
20040264780 Zhang et al. Dec 2004 A1
20050013479 Xiao et al. Jan 2005 A1
20050036676 Heisele Feb 2005 A1
20050063569 Colbert et al. Mar 2005 A1
20050069208 Morisada Mar 2005 A1
20060006077 Mosher et al. Jan 2006 A1
20060018521 Avidan Jan 2006 A1
20060104488 Bazakos et al. May 2006 A1
20060120599 Steinberg et al. Jun 2006 A1
20060140055 Ehrsam et al. Jun 2006 A1
20060140455 Costache et al. Jun 2006 A1
20060177100 Zhu et al. Aug 2006 A1
20060177131 Porikli Aug 2006 A1
20060228040 Simon et al. Oct 2006 A1
20060239515 Zhang et al. Oct 2006 A1
20060251292 Gokturk et al. Nov 2006 A1
20070011651 Wagner Jan 2007 A1
20070091203 Peker et al. Apr 2007 A1
20070098303 Gallagher et al. May 2007 A1
20070154095 Cao et al. Jul 2007 A1
20070154096 Cao et al. Jul 2007 A1
20080137919 Kozakaya et al. Jun 2008 A1
20080144966 Steinberg et al. Jun 2008 A1
Foreign Referenced Citations (9)
Number Date Country
2370438 Jun 2002 GB
5260360 Oct 1993 JP
WO-2007142621 Dec 2007 WO
WO-2008015586 Feb 2008 WO
WO-2008109622 Sep 2008 WO
WO-2008107112 Sep 2008 WO
WO-2008107112 Jan 2009 WO
WO-2010063463 Jun 2010 WO
WO-2010063463 Jun 2010 WO
Related Publications (1)
Number Date Country
20080220750 A1 Sep 2008 US
Provisional Applications (1)
Number Date Country
60893114 Mar 2007 US