1. Field of the Invention
The present invention relates to multi-media messages and more specifically to a system and method of customizing the creation and sending of multi-media messages.
2. Discussion of Related Art
There is a growing popularity for text-to-speech (“TTS”) enabled systems that combine voice with a “talking head” or a computer-generated face that literally speaks to a person. Such systems improve user experience with a computer system by personalizing the exchange of information. Systems for converting text into speech are known in the art. For example, U.S. Pat. No. 6,173,263 B1 to Alistair Conkie, assigned to the assignee of the present invention, discloses a system and method of performing concatenative speech synthesis. The contents of this patent are incorporated herein by reference.
One example associated with the creation and delivery of e-mails using a TTS system is LifeFX™'s Facemail™.
This system enables a sender to write an e-mail and choose a talking head or “face” to deliver the e-mail. The recipient of the e-mail needs to download special TTS software in order to enable the “face” to deliver the message. The downloaded software converts the typewritten e-mail from the e-mail sender into audible words, and synchronizes the head and mouth movements of the talking head to match the audibly spoken words. Various algorithms and software may be used to provide the TTS function as well as the synchronization of the speech with the talking head. For example, the article “Photo-realistic Talking-heads From Image Samples,” by E. Cosatto and H. P. Graf, IEEE Transactions on Multimedia, September 2000, Vol. 2, Issue 3, pages 152-163, describes a system for creating a realistic model of a head that can be animated and lip-synched from phonetic transcripts of text. The contents of this article are incorporated herein by reference. Such systems, when combined with TTS synthesizers, generate video animations of talking heads that resemble people. One drawback of related systems is that the synthesized voice bears no resemblance to the sender's voice.
The LifeFX™ system presents the user with a plurality of faces 20 from which to choose. Once a face is chosen, the e-mail sender composes an e-mail message. Within the e-mail, the sender inserts features to increase the emotion shown by the computer-generated face when the e-mail is “read” to the e-mail recipient. For example, the following will result in the message being read with a smile at the end: “Hi, how are you today?:-)”. These indicators of emotion are called “emoticons” and may include such features as: :-( (frown); :-o (wow); :-x (kiss); and ;-) (wink). The e-mail sender types in these symbols, which the system translates into the corresponding emotions. Therefore, after composing a message, inserting emoticons, and choosing a face, the sender sends the message. The recipient gets an e-mail with a notification that he or she has received a facemail and needs to download a player to hear the message.
The LifeFX™ system presents its emoticons when delivering the message in a particular way. For example, when an emoticon such as a smile is inserted in the sentence “Hi, Jonathan, :-) how are you today?” the “talking head” 22 speaks the words “Hi, Jonathan” and then stops talking and begins the smiling operation. After finishing the smile, the talking head completes the sentence “how are you today?”
The LifeFX™ system only enables the recipient to hear the message after downloading the appropriate software. There are several disadvantages to delivering multi-media messages in this manner. Such software requires a large amount of disk space, and the recipient may not wish to give up that space for the necessary software. Further, with viruses prevalent on the Internet, many people are naturally reluctant to download software when they are unfamiliar with its source.
What is needed in the art is a system and method of making emoticons within multi-media messages more natural for a message recipient to hear and see. Enhanced presentation of emoticons in multi-media messages will increase user appreciation of, and the interactive experience with, the multi-media message. Furthermore, the prior art method of inserting emoticons into text is cumbersome. For example, typing a smile “:-)” requires at least three keystrokes. Therefore, another aspect of the present invention relates to improving the ease with which a sender can choose and insert emoticons into the text of a message.
Accordingly, the present invention addresses the deficiency of emoticon presentation by starting the visual representation of an emotion a predetermined period of time prior to the location of the emoticon in the text and completing the emotion a predetermined length of time following the location of the emoticon. Further, the presentation of an emotion may begin a predetermined period of time, number of syllables, or number of words prior to the placement of the emoticon in the sentence by the message sender, and end a corresponding interval after it.
For example, in our above sentence “Hi, Jonathan, :-) how are you today?”, the smile may start one second before the smile emoticon and end one second after. The smile may also start before the word “Jonathan” and end after the word “how.” In another variation, the smile may begin two syllables before the emoticon, on the “a” sound of “Jonathan,” and end after “how are.” There are many variations on this arrangement and it does not always have to be symmetrical around the emoticon. In other words, it is not necessary that the expression begin “x” number of syllables/words/time before the emoticon and end “x” number of syllables/words/time after the emoticon. Mixing and matching of starting and ending effects is contemplated to maximize the presentation of a natural multi-media message.
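By way of a non-limiting illustration, the following sketch shows one way such an asymmetric expression window could be computed in word units; the function name and the offset values are hypothetical, not part of the disclosed system.

```python
# A minimal sketch (hypothetical names) of computing an asymmetric
# expression window around an emoticon, measured in words.

def expression_window(words, emoticon_index, words_before=1, words_after=1):
    """Return (start, end) word indices for an expression whose
    emoticon sits at `emoticon_index` in the word list.

    The window need not be symmetrical: `words_before` and
    `words_after` are chosen independently for each emoticon.
    """
    start = max(0, emoticon_index - words_before)
    end = min(len(words) - 1, emoticon_index + words_after)
    return start, end

# "Hi, Jonathan, :-) how are you today?" tokenized with the emoticon kept:
words = ["Hi,", "Jonathan,", ":-)", "how", "are", "you", "today?"]
start, end = expression_window(words, words.index(":-)"),
                               words_before=1, words_after=2)
print(words[start:end + 1])   # smile spans "Jonathan," through "are"
```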
Another embodiment of the invention relates to a method of customizing a multi-media message with emoticons, the multi-media message being created by a sender for a recipient, wherein the multi-media message comprises an animated entity audibly delivering a text message. The method comprises storing emoticons related to actions associated with the animated entity, providing to a sender at least one button option for choosing emoticons to insert into the text message at a location of a cursor, and, upon the sender choosing an emoticon using one of the at least one button options, inserting the emoticon into the text message at the location of the cursor, wherein when the animated entity delivers the text message, the animated entity exhibits the actions associated with the inserted emoticons. In this manner, rather than using at least three keystrokes, an emoticon may be inserted into the text, typically at the point of the cursor, by a single button click. The emoticons may comprise, for example, a wink, a smile, an affirmative animated entity motion, eyes opening and staring, eyes popping out, and nose elongation.
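A minimal sketch of the single-action insertion described above follows; the function and variable names are hypothetical illustrations, not taken from the disclosure.

```python
# A minimal sketch of inserting an emoticon at the cursor with a
# single action rather than three or more keystrokes.

def insert_emoticon(text, cursor, emoticon):
    """Insert `emoticon` into `text` at `cursor` and return the new
    text plus the new cursor position (after the inserted emoticon)."""
    new_text = text[:cursor] + emoticon + text[cursor:]
    return new_text, cursor + len(emoticon)

message = "Hi, Jonathan, how are you today?"
cursor = message.index("how")                 # cursor placed before "how"
message, cursor = insert_emoticon(message, cursor, ":-) ")  # one click
print(message)   # Hi, Jonathan, :-) how are you today?
```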
In yet another variation on the availability of buttons to insert emoticons, the method further comprises presenting the sender an amplitude option associated with the chosen emoticon. Upon the sender selecting an amplitude associated with the chosen emoticon, the method comprises applying the chosen amplitude to the chosen emoticon when the multi-media message is presented to the recipient. In this manner, the sender can choose a small smile or a large smile, small frown or large frown, etc.
The amplitude associated with an emoticon may be constant or change over time. A smile emoticon, for example, might create a smile of three phases, similar to a human smile with an onset, a peak and a decay phase.
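The three-phase amplitude could be modeled, purely as an illustration, as a piecewise-linear envelope; the phase durations below are assumed values, not taken from the disclosure.

```python
# A minimal sketch (assumed phase durations) of a three-phase smile
# amplitude envelope: onset, peak, and decay, as described above.

def smile_amplitude(t, peak=1.0, onset=0.3, hold=0.8, decay=0.5):
    """Amplitude of the smile at time t (seconds from expression start).

    Ramps linearly up during `onset`, holds at `peak` for `hold`
    seconds, then ramps linearly back down during `decay`.
    """
    if t < 0 or t > onset + hold + decay:
        return 0.0
    if t < onset:
        return peak * t / onset
    if t < onset + hold:
        return peak
    return peak * (onset + hold + decay - t) / decay

for t in (0.0, 0.15, 0.5, 1.3, 1.6):
    print(f"t={t:.2f}s amplitude={smile_amplitude(t):.2f}")
```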
In the prior art, the text of the message, with the inserted emoticons, appears awkward with the characters representing the emoticons within the text. Accordingly, another aspect of the present invention relates to inserting an icon representing an emoticon into the text message at the location of the cursor. An icon may be a small, simplified face with a representation of the emotion and may further include or display the relative amplitude of the emotion chosen by the sender. For example, an icon may be a small face with a very large smile if the sender has chosen a smile and then increased the amplitude of the smile.
The foregoing advantages of the present invention will be apparent from the following detailed description of several embodiments of the invention with reference to the corresponding accompanying drawings, in which:
a) illustrates the basic architecture of the system according to an embodiment of the present invention; and
b) illustrates a low-bandwidth version of the system shown in a).
The present invention may be best understood with reference to the accompanying drawings and description herein. The basic system design supporting the various embodiments of the invention is first disclosed. A system comprises a TTS server and an animation server to provide a multi-media message service over the Internet wherein a sender can create a multi-media message presentation delivered audibly by an animated entity.
a) illustrates a high-bandwidth architecture 60 associated with the embodiments of the invention. The system 60 delivers a hyper-text mark-up language (HTML) page through the Internet 62 (connected to a web server, not shown but embodied in the Internet 62) to a client application 64. The HTML page, shown by way of example in the accompanying drawings, enables the sender to compose the multi-media message.
The web server receives the composed multi-media message, which includes several components that are additional to a regular e-mail or instant message. For example, a multi-media message includes a designation of an animated entity for audibly delivering the message and emoticons that add emotional elements to the animated entity during the delivery of the message. The HTML page delivered to the client terminal enables the sender to manipulate various buttons and inputs to create the multi-media message.
Once the sender finishes creating the multi-media message and sends the message, the Internet 62 transmits the message text with emoticons and other chosen parameters to a text-to-speech (TTS) server 66 that communicates with an animation or face server 68 to compute and synchronize the multi-media message. The transmission of the text-to-speech data may be accomplished using such methods as those disclosed in U.S. Pat. No. 6,173,250 B1 to Kenneth Jong, assigned to the assignee of the present invention. The contents of this patent are incorporated herein by reference.
The animation server 68 receives phonemes associated with the sender message and interpreted by the TTS server 66, including the text of the subject line and other text such as the name of the sender, as well as other defined parameters or data. The animation server 68 processes the received phonemes, message text, emoticons and any other provided parameters such as background images or audio and creates an animated message that matches the audio and the emoticons. An exemplary method for producing the animated entity is disclosed in U.S. Pat. No. 5,995,119 to Cosatto et al. (“Cosatto et al.”). The Cosatto et al. patent is assigned to the assignee of the present invention and its contents are incorporated herein by reference. Cosatto et al. disclose a system and method of generating animated characters that can “speak” or “talk” received text messages. Another reference for information on generating animated sequences of animated entities is found in U.S. Pat. No. 6,122,177 to Cosatto et al. (“Cosatto et al. II”). The contents of Cosatto et al. II are incorporated herein by reference as well.
The system 60 encodes the audio and video portions of the multi-media message for streaming through a streaming audio/video server 70. In the high-bandwidth version of the present invention shown in a), the streaming server 70 streams the encoded audio and video over the Internet 62 to the streaming client 72 for presentation to the recipient.
b) illustrates a low-bandwidth system 61 of the present invention. In this variation, the animation server 68 produces animation parameters that are synchronized with the audio produced from the TTS server 66. The audio and animation parameters are encoded and transmitted by the streaming server 74 over a lower bandwidth connection over the Internet 62. The streaming client 76 in this aspect of the invention differs from the streaming client 72 of the high-bandwidth system in that it renders the animated entity locally from the received animation parameters rather than receiving fully rendered video.
A further variation of the invention applies when the client device includes the animation or rendering software. In this case, the client device 72, 76 can receive a multi-media message e-mail, with the message declared as a specific multipurpose Internet mail extension (MIME) type, and render the animation locally without requiring access to a central server or streaming server 70, 74. In one aspect of the invention, the rendering software includes a TTS synthesizer with the available voices. In this case, the recipient device 72, 76 receives the text (very little data) and the face model (several kB), unless the model is already stored in a cache at the receiver device 72, 76. If the receiver device 72, 76 is requested to synthesize a voice different from the ones available at its TTS synthesizer, the server 74 downloads the new voice to the device.
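As a rough illustration of declaring a message as a specific MIME type, the sketch below uses Python's standard email library; the subtype name and file name are invented for illustration and are not specified by the disclosure.

```python
# A minimal sketch of tagging a multi-media message with a custom
# MIME type so a capable client can render it locally. The subtype
# "x-multimedia-message" and filename are hypothetical.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "A multi-media message"

# Plain-text fallback for clients without the rendering software.
msg.set_content("Hi, Jonathan, :-) how are you today?")

# The message text (very little data) tagged with the custom type;
# the face model would travel separately or come from the client cache.
msg.add_attachment(
    b"Hi, Jonathan, :-) how are you today?",
    maintype="application",
    subtype="x-multimedia-message",
    filename="message.mmm",
)
print(msg["Content-Type"])   # multipart/mixed wrapper around both parts
```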
High quality voices typically require several megabytes of disk space. Therefore, if the voice is stored on a streaming server 74, in order to avoid the delay of such a large download, the server 74 uses a TTS synthesizer to create the audio. The server 74 then streams the audio and related markup information, such as phonemes, stress, word boundaries, and bookmarks with emoticons, together with related timestamps, to the recipient. The recipient device 76 locally renders the animation using the face model and the markup information and synchronously plays the audio streamed from the server.
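The streamed markup might be represented, for illustration only, as a list of timestamped events such as the following; the field names and values are assumptions, not part of the disclosure.

```python
# A minimal sketch (hypothetical field names) of the markup records a
# server might stream alongside the audio: phonemes, stress marks,
# word boundaries, and emoticon bookmarks, each with a timestamp.

from dataclasses import dataclass

@dataclass
class MarkupEvent:
    time: float   # seconds from the start of the audio stream
    kind: str     # "phoneme" | "stress" | "word-boundary" | "bookmark"
    value: str

markup = [
    MarkupEvent(0.00, "word-boundary", "Hi,"),
    MarkupEvent(0.02, "phoneme", "HH"),
    MarkupEvent(0.10, "phoneme", "AY"),
    MarkupEvent(0.35, "word-boundary", "Jonathan,"),
    MarkupEvent(0.40, "stress", "primary"),
    MarkupEvent(0.95, "bookmark", "smile"),   # emoticon bookmark
]

# The recipient device renders the face model against these events
# while synchronously playing the audio streamed from the server.
for ev in markup:
    print(f"{ev.time:5.2f}s {ev.kind:14s} {ev.value}")
```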
When the recipient receives an e-mail message associated with the multi-media message, the message is received on a client device 71 such as that shown in the accompanying drawings.
The multi-media message delivery mechanism is also not limited to an e-mail system. For example, other popular forms of communication include instant messaging, bulletin boards, I Seek You (ICQ) and other messaging services. Instant messaging and the like differ from regular e-mail in that their primary focus is immediate end-user delivery. In this sense, the sender and recipient essentially become interchangeable because the messages are communicated back and forth in real time. Presence information for a user with an open session to a well-known multi-user system enables friends and colleagues to instantly communicate messages back and forth. Those of skill in the art know various architectures for simple instant messaging and presence awareness/notification. Because the particular embodiment of the instant messaging, bulletin board, ICQ, or other messaging service is not relevant to the general principles of the present invention, no further details are provided here. Those of skill in the art will understand and be able to apply the principles disclosed herein to the particular communication application. Although the best mode and preferred embodiment of the invention relates to the e-mail context, the multi-media messages may be created and delivered via any messaging context.
For instant messaging, client sessions are established using a multicast group (more than two participants) or unicast (two participants). As part of the session description, each participant specifies the animated entity representing him or her. Each participant loads the animated entities of the other participants. When a participant sends a message as described for the e-mail application, the message is sent to a central server that either animates the entity for the other participants to view or streams appropriate parameters (audio/animation parameters, audio/video, text/animation parameters, or just text) to the participants, whose client software uses them to render the animated entity.
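For illustration, a session description of this kind might carry each participant's animated-entity identifier roughly as follows; the structure and all names are hypothetical.

```python
# A minimal sketch (hypothetical structure) of an instant-messaging
# session description in which each participant names the animated
# entity that represents him or her, loaded by the other participants.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    entity: str        # identifier of the participant's animated entity

@dataclass
class Session:
    mode: str          # "unicast" (two) or "multicast" (more than two)
    participants: list

session = Session(
    mode="multicast",
    participants=[
        Participant("alice", "face-07"),
        Participant("bob", "face-12"),
        Participant("carol", "face-03"),
    ],
)

# Each client loads the entities of the other participants up front,
# then renders messages locally or plays streamed parameters.
others = [p.entity for p in session.participants if p.name != "alice"]
print(others)   # ['face-12', 'face-03']
```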
In an alternate aspect of the invention, the client device 71 stores previously downloaded specific rendering software for delivering multi-media messages. As discussed above, LifeFX™ requires the recipient to download its client software before the recipient may view the message. Therefore, some of the functionality of the present invention is applied in the context of the client terminal 71 containing the necessary software for delivering the multi-media message. In this case, the animation server 68 and TTS server 66 create and synchronize the multi-media message for delivery. The multi-media message is then transmitted, preferably via e-mail, to the recipient. When the recipient opens the e-mail, an animated entity shown in the message delivery window delivers the message. The local client software runs to locally deliver the message using the animated entity.
Many web-based applications require client devices to download software on their machines, as with the LifeFX™ system. As mentioned above, problems exist with this requirement since customers in general are reluctant and rightfully suspicious about downloading software over the Internet because of well-known security problems such as virus contamination, Trojan horses, and zombies. New software installations often cause problems with the existing software or hardware on the client device. Further, many users do not have the expertise to run the installation process if it gets even slightly complicated, e.g., asking about system properties, directories, and the like. Further, downloading and installing software takes time. These negative considerations may prevent hesitant users from downloading the software and using the service.
Some Java-based applications have been proposed as a solution to the above-mentioned problems, but these are more restrictive due to security precautions, cannot be used to implement all applications, and lack a unified Java implementation. Therefore, users need to configure their browsers to allow Java-based program execution. As with the problems discussed above, a time-consuming download of the Java executable for each use, by users who do not yet know whether they need or want the new application, may prevent users from bothering with the Java-based software.
Accordingly, an aspect of the present invention includes using streaming video to demonstrate the use of a new software application. Enabling the user to preview the use of a new software application solves the above-mentioned problems for many applications. Currently, almost all client machines have a streaming video client such as Microsoft's Mediaplayer® or Real Player®. If not, such applications can be downloaded and configured with confidence. Note that the user needs to do this only once. These streaming video receivers can be used to receive and play back video on the client's machine.
According to this aspect of the present invention, shown by way of example in the accompanying drawings, the user requests a preview of the multi-media message, and the system streams the message to the client as video that plays in a standard streaming video player.
Therefore, an aspect of the present invention enables the user, before downloading rendering software for presenting multi-media messages using an animated entity, to request a preview of the multi-media message streamed to the client as a video and presented on a player such as the Microsoft Mediaplayer® or Real Player®. If the user so desires, he or she can then download the rendering software for enjoying the reception of multi-media messages.
The sender may also insert emoticons 103 into the text of the message. The system includes predefined emoticons 96, such as “:-)” for a smile, “::-)” for a head nod, “*w*” for an eye wink, and so forth. The predefined emoticons are represented either as icons or as text, such as “;-)”.
Once the sender composes the text of the message, chooses an animated entity 94, and inserts the desired emoticons 103, he or she generates the multi-media message by clicking on the generate message button 98. The animation server 68 creates an animated video of the selected animated entity 94 for audibly delivering the message. The TTS server 66 converts the text to speech as mentioned above. Emoticons 103 in the message are translated into their corresponding facial expressions such as smiles and nods. The position of an emoticon 103 in the text determines when the facial expression is executed during delivery of the message.
Execution of a particular expression preferably occurs before the specific location of the emoticon in the text. This is in contrast to the LifeFX™ system, discussed above, in which the execution of the smile emoticon in the text “Hello, Jonathan :-) how are you?” starts and ends between the words “Jonathan” and “how”. In the present invention, the expression of the emoticon begins a predefined number of words or a predefined time before the emoticon's location in the text. Furthermore, the end of the expression of an emoticon may be a predefined number of words after the location of the emoticon in the text or a predetermined amount of time after the location of the emoticon.
For example, according to an aspect of the present invention, the smile in the sentence “Hello, Jonathan :-) how are you?” will begin after the word “Hello” and continue through the word “how” or even through the entire sentence. The animated entity in this case will be smiling while delivering most of the message—which is more natural for the recipient than having the animated entity pause while executing an expression.
Furthermore, the starting and stopping points for executing expressions will vary depending on the expression. For example, a wink typically takes a very short amount of time to perform whereas a smile may last longer. Therefore, the starting and stopping points for a wink may be defined in terms of 0.1 seconds before its location in the text to 0.5 seconds after the location of the wink emoticon in the text. In contrast, the smile emoticon's starting, stopping, and duration parameters may be defined in terms of the words surrounding the emoticons.
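Such per-emoticon timing could be captured, purely as an illustration, in a small table of parameters like the following; the values echo the examples given above, while the representation itself is hypothetical.

```python
# A minimal sketch (illustrative values) of per-emoticon timing: a
# wink is brief and defined in seconds around its text location,
# while a smile is defined in words, as described above.

EXPRESSION_TIMING = {
    # seconds before / seconds after the emoticon's location
    "wink":  {"unit": "seconds", "before": 0.1, "after": 0.5},
    # words before / words after the emoticon's location
    "smile": {"unit": "words",   "before": 1,   "after": 2},
}

def describe(emoticon):
    t = EXPRESSION_TIMING[emoticon]
    return (f"{emoticon}: starts {t['before']} {t['unit']} before and "
            f"ends {t['after']} {t['unit']} after its location")

print(describe("wink"))
print(describe("smile"))
```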
The group of emoticons available for choosing can include a wink, smile, frown, surprise, affirmative animated entity motion, such as a nod of the head, eyes opening and staring, eyes popping out, eyes rolling, shoulder shrug, tongue motion, embarrassment, blushing, scream, tears, kiss and nose elongation. All varieties of facial expressions and emotions are contemplated as part of the present disclosure and the particular set of emoticons is unimportant to this invention.
As the sender increases or decreases the amplitude of the inserted emoticon, the expression shown in the smile icon 106 may reflect the modified amplitude. For example, with a text emoticon in the message text (not shown), a smile that is increased in amplitude by the amplitude bar 110 becomes “:-)))”. Here the repetition of the last symbol of the emoticon is used to control the amplitude of the emoticon. Generally, the emoticon/amplitude symbols may be defined as a prefix code. Similarly, an icon emoticon 103 may reflect an increased amplitude in its appearance. The increased intensity of the emoticon may be accomplished by changing the icon from a black-on-white background to a black-on-colored background (such as red or yellow) where the intensity of the background color reflects the amplitude. The amplitude of an emoticon may also be changed by other means, such as by clicking the right mouse button, or its equivalent, to increase the amplitude or by clicking the left mouse button, or its equivalent, to decrease the amplitude. In this regard, the sender can control the intensity of the emotion expressed by the animated entity to the recipient.
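Because the base emoticons form a prefix code, the amplitude can be recovered unambiguously from the repetition count of the final symbol. A minimal parsing sketch follows; the particular emoticon set and names are illustrative.

```python
# A minimal sketch of reading amplitude from repetition of the final
# emoticon symbol, e.g. ":-)" -> 1 and ":-)))" -> 3; the base
# emoticons form a prefix code, so the match is unambiguous.

import re

BASE_EMOTICONS = {":-)": "smile", ":-(": "frown", ";-)": "wink"}

def parse_emoticon(token):
    """Return (expression, amplitude) or None if not an emoticon."""
    for base, expression in BASE_EMOTICONS.items():
        last = re.escape(base[-1])
        m = re.fullmatch(re.escape(base[:-1]) + f"({last}+)", token)
        if m:
            return expression, len(m.group(1))  # repetitions = amplitude
    return None

print(parse_emoticon(":-)"))    # ('smile', 1)
print(parse_emoticon(":-)))"))  # ('smile', 3)
```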
A method of delivering a multi-media message according to an embodiment of the present invention is shown by way of example in the accompanying drawings.
In some cases, the animated entity may be unable to present the selected feature or the selected emoticons at their adjusted amplitude. If this occurs, the system may either ignore the chosen features or replace the chosen feature with a replacement feature using default parameters or the parameters most closely related to the chosen feature.
The animated entity is preferably a face, but, via either predefined animated entities or sender-customizable animated entities, the entity may be some other object for delivering the message. Preferably, when the emoticon is inserted into the text of the message, an icon representing the emoticon is inserted. An icon or some other visual representation of the smile, frown, wink, or whichever emoticon was chosen is more pleasing for the sender and recipient to view (if the recipient chooses to view the message text).
Another aspect of the present invention relates to the timing of an emoticon's presentation relative to the words surrounding it in the text.
For example, in the sentence “Hi there :-), how are you?” the face server 68 and TTS server 66 are programmed to begin the expression of the smile at the beginning of the word “there,” and when the smile emoticon is reached in the text, the presentation of the smile is at its peak. Then the smile is reduced until it ends after “how.” In this example, the predefined number of words is one. However, variations include choosing not just a predefined number of words but a predefined number of syllables, or analyzing the length of the words before and after the emoticon to determine how long before or after its position to start and stop the presentation. The time may also vary from emoticon to emoticon.
Similarly, how long before the emoticon its presentation starts, or how long after the emoticon it stops, may also be context-driven by the length of words or the position in the sentence. For example, the sentence “Hi there, how may I help you :-)?” may be context-driven to start the smile at “how” but then stop the smile immediately after “:-)”, since the smile was placed at the end of a sentence.
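A sketch of such context-driven timing follows, with illustrative offsets: the expression peaks at the emoticon, but stops immediately when the emoticon ends a sentence.

```python
# A minimal sketch of context-driven stopping: an emoticon placed at
# the end of a sentence ends its expression there instead of running
# a fixed number of words past its location.

def expression_span(words, idx, before=2, after=1):
    """Start/end word indices for an emoticon at word index `idx`."""
    start = max(0, idx - before)
    end = min(len(words) - 1, idx + after)
    # Stop immediately if the emoticon ends the sentence.
    if idx == len(words) - 1 or words[idx].rstrip("?.!") != words[idx]:
        end = idx
    return start, end

# "Hi there, how may I help you :-)?" -- smile placed at sentence end.
words = "Hi there, how may I help you :-)?".split()
start, end = expression_span(words, words.index(":-)?"))
print(words[start:end + 1])   # ['help', 'you', ':-)?'] -- stops at the end
```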
In another variation of the present invention, the method comprises providing the sender an option to associate at least one typed word with a chosen emoticon; if the sender does so, each typed word associated with the emoticon is tied to the presentation of the chosen emoticon by the animated entity. In this manner, the sender can further modify and control the beginning, length and ending of an emoticon presentation. The sender can associate typed words with an emoticon by underlining, coloring, highlighting, or any other means. For example, the method may comprise providing the sender an option to assign a color to the at least one typed word such that the chosen emoticon begins to be presented by the animated entity to the recipient at the first typed word with the assigned color and the chosen emoticon presentation by the animated entity ends at the last typed word with the assigned color.
In this case, in a sentence such as “Hi John, :-) are you pleased that the stock market is up?”, the words “are” through “up” are underlined, the underlining representing the highlighting: the sender chooses to begin the smile at the beginning of the word “are” and to continue the smile through the word “up”. The method comprises receiving the indicated duration of the emotion and presenting the chosen duration of the emotion as the animated entity delivers the message. As mentioned above, the highlighting can occur through coloring words, underlining words, or some other means of indicating the emotion.
In another variation of the invention, providing the sender with options to highlight words to associate one or a group of words to a specific emoticon also enables the sender to include amplitude information for the presentation of the emoticon in the message. For example, the user may be given the option to underline a word or words more than once. The more times the word or words are underlined, the greater the amplitude of the presented emotion.
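For illustration, highlighted words and their underline counts might map to an expression span and amplitude as follows; the token representation is an assumption, not part of the disclosure.

```python
# A minimal sketch (hypothetical representation) of associating
# highlighted words with an emoticon: the expression runs from the
# first highlighted word to the last, and the number of underlines
# sets the amplitude of the presented emotion.

def expression_from_highlights(tokens):
    """`tokens` is a list of (word, underline_count) pairs; underlined
    words (count > 0) bound the expression, and the maximum count is
    its amplitude. Returns (start, end, amplitude) or None."""
    marked = [i for i, (_, n) in enumerate(tokens) if n > 0]
    if not marked:
        return None
    amplitude = max(n for _, n in tokens)
    return marked[0], marked[-1], amplitude

# "Hi John, :-) are you pleased that the stock market is up?" with
# "are" ... "up?" underlined, and "up?" underlined twice for emphasis.
tokens = [("Hi", 0), ("John,", 0), (":-)", 0), ("are", 1), ("you", 1),
          ("pleased", 1), ("that", 1), ("the", 1), ("stock", 1),
          ("market", 1), ("is", 1), ("up?", 2)]
start, end, amplitude = expression_from_highlights(tokens)
print(start, end, amplitude)  # 3 11 2: smile from "are" to "up?", amplitude 2
```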
In yet another variation of the invention, the method enables the sender to insert start and stop signs for indicating starting and stopping points for the presentation of emotions by the animated entity. The starting and stopping points may be inserted via text or via icons.
To ensure that the proper emoticon is associated with the intended start and stop signs, the web server 63 may include software instructing the server, before delivering the multi-media message to the recipient, to check the consistency of the start sign and stop sign inserted into the message text by the sender. If the start sign and the stop sign are consistent, with a single emoticon inserted between them, the message is delivered to the recipient.
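One possible form of such a consistency check, sketched below with hypothetical start/stop tokens, verifies that each start sign is followed by exactly one emoticon and then its matching stop sign.

```python
# A minimal sketch (hypothetical tokens) of the consistency check the
# web server might run before delivery: every start sign must be
# followed by exactly one emoticon and then its matching stop sign.

EMOTICONS = {":-)", ":-(", ";-)"}

def signs_consistent(tokens, start="<start>", stop="<stop>"):
    """True iff start/stop signs pair up with one emoticon between."""
    open_span = False
    emoticons_in_span = 0
    for tok in tokens:
        if tok == start:
            if open_span:
                return False          # nested or repeated start sign
            open_span, emoticons_in_span = True, 0
        elif tok == stop:
            if not open_span or emoticons_in_span != 1:
                return False          # unmatched stop, or not one emoticon
            open_span = False
        elif tok in EMOTICONS and open_span:
            emoticons_in_span += 1
    return not open_span              # no dangling start sign

ok = "Hi <start> Jonathan :-) how <stop> are you".split()
bad = "Hi <start> Jonathan how <stop> are you".split()
print(signs_consistent(ok), signs_consistent(bad))  # True False
```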
In yet another variation of the invention, the method relates to customizing a multi-media message by choosing features from a group of stored features, the multi-media message being created by a sender where text typed by the sender is presented to a recipient using an animated entity in the multi-media message. The method comprises providing to the sender at least one button option, each button option of the at least one button option associated with a feature to add to the animated entity. Upon the sender choosing a feature using one of the at least one button options, the chosen feature is inserted into the text of the message, wherein as the multi-media message is delivered to the recipient, the chosen feature is presented in a visual and audible manner by the animated entity. A sample group of stored features comprises an eye color feature, a mouth protrusion feature, a skinniness feature, a fat feature and an age feature. Other features are contemplated for adjusting the features of the animated entity.

Although the above description may contain specific details, they should not be construed as limiting the claims in any way. Other configurations of the described embodiments of the invention are part of the scope of this invention. For example, the present disclosure is presented in the context of delivering e-mails. However, the present invention may be applied in any communication context where an animated entity can deliver a message created from text. For example, instant messaging technology may include an option to type a message and have the message delivered by an animated face. The present invention may therefore be applied in a variety of contexts. Accordingly, only the appended claims and their legal equivalents should define the invention, rather than any specific examples given.
The present application is a continuation of U.S. patent application Ser. No. 10/003,350, filed on Nov. 2, 2001, now U.S. Pat. No. 6,990,452, which claims priority to provisional U.S. Patent Application No. 60/245,521, filed Nov. 3, 2000. The contents of U.S. patent application Ser. No. 10/003,350 and provisional U.S. Patent Application No. 60/245,521 are incorporated by reference herein in their entirety. The present application is related to the following U.S. patent applications: Ser. No. 10/003,094 entitled “System and Method for Sending Multi-Media Message With Customized Audio”; Ser. No. 10/003,091 entitled “System and Method for Receiving Multi-Media Messages”; Ser. No. 10/003,093 entitled “System and Method for Sending Multi-Media Messages Using Customizable Background Images”; Ser. No. 10/003,092 entitled “System and Method of Customizing Animated Entities for Use in a Multi-Media Communication Application”; Ser. No. 09/999,526 entitled “System and Method of Controlling Sound in a Multi-Media Communication Application”; Ser. No. 09/555,525 entitled “System and Method of Marketing Using a Multi-Media Communication System”; and Ser. No. 09/999,505 entitled “System and Method of Providing Multi-Cultural Multi-Media Messages.” These applications, filed concurrently with Ser. No. 10/003,350 and commonly assigned, are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---
4276570 | Burson et al. | Jun 1981 | A |
4602280 | Maloomian | Jul 1986 | A |
5113493 | Crosby | May 1992 | A |
5347306 | Nitta | Sep 1994 | A |
5387178 | Moses | Feb 1995 | A |
5416899 | Poggio et al. | May 1995 | A |
5420801 | Dockter et al. | May 1995 | A |
5537662 | Sato et al. | Jul 1996 | A |
5546500 | Lyberg | Aug 1996 | A |
5555343 | Luther | Sep 1996 | A |
5613056 | Gasper et al. | Mar 1997 | A |
5630017 | Gasper et al. | May 1997 | A |
5638502 | Murata | Jun 1997 | A |
5640590 | Luther | Jun 1997 | A |
5657426 | Waters et al. | Aug 1997 | A |
5659692 | Poggio et al. | Aug 1997 | A |
5680481 | Prasad et al. | Oct 1997 | A |
5689618 | Gasper et al. | Nov 1997 | A |
5697789 | Sameth et al. | Dec 1997 | A |
5745360 | Leone et al. | Apr 1998 | A |
5818461 | Rouet et al. | Oct 1998 | A |
5826234 | Lyberg | Oct 1998 | A |
5832115 | Rosenberg | Nov 1998 | A |
5848396 | Gerace | Dec 1998 | A |
5850463 | Horii | Dec 1998 | A |
5852669 | Eleftheriadis et al. | Dec 1998 | A |
5857099 | Mitchell et al. | Jan 1999 | A |
5880731 | Liles et al. | Mar 1999 | A |
5889892 | Saito | Mar 1999 | A |
5933151 | Jayant et al. | Aug 1999 | A |
5936628 | Kitamura et al. | Aug 1999 | A |
5950163 | Matsumoto | Sep 1999 | A |
5969721 | Chen et al. | Oct 1999 | A |
5970173 | Lee et al. | Oct 1999 | A |
5970453 | Sharman | Oct 1999 | A |
5982853 | Liebermann | Nov 1999 | A |
5983190 | Trower et al. | Nov 1999 | A |
5995639 | Kado et al. | Nov 1999 | A |
6002997 | Tou | Dec 1999 | A |
6011537 | Slotznick | Jan 2000 | A |
6014634 | Scroggie et al. | Jan 2000 | A |
6014689 | Budge et al. | Jan 2000 | A |
6018744 | Mamiya et al. | Jan 2000 | A |
6018774 | Mayle et al. | Jan 2000 | A |
6044248 | Mochizuki et al. | Mar 2000 | A |
6068183 | Freeman et al. | May 2000 | A |
6069622 | Kurlander | May 2000 | A |
6075857 | Doss et al. | Jun 2000 | A |
6075905 | Herman et al. | Jun 2000 | A |
6078700 | Sarachik | Jun 2000 | A |
6088040 | Oda et al. | Jul 2000 | A |
6111590 | Boezeman et al. | Aug 2000 | A |
6122606 | Johnson | Sep 2000 | A |
6147692 | Shaw et al. | Nov 2000 | A |
6166744 | Jaszlics et al. | Dec 2000 | A |
6208359 | Yamamoto | Mar 2001 | B1 |
6215505 | Minami et al. | Apr 2001 | B1 |
6219638 | Padmanabhan et al. | Apr 2001 | B1 |
6225978 | McNeil | May 2001 | B1 |
6230111 | Mizokawa | May 2001 | B1 |
6243681 | Guji et al. | Jun 2001 | B1 |
6289085 | Miyashita et al. | Sep 2001 | B1 |
6307567 | Cohen-Or | Oct 2001 | B1 |
6324511 | Kiraly et al. | Nov 2001 | B1 |
6329994 | Gever et al. | Dec 2001 | B1 |
6332038 | Funayama et al. | Dec 2001 | B1 |
6343141 | Okada et al. | Jan 2002 | B1 |
6366286 | Hermanson | Apr 2002 | B1 |
6366949 | Hubert | Apr 2002 | B1 |
6377925 | Greene et al. | Apr 2002 | B1 |
6381346 | Erasian | Apr 2002 | B1 |
6384829 | Prevost et al. | May 2002 | B1 |
6385586 | Dietz | May 2002 | B1 |
6393107 | Ball et al. | May 2002 | B1 |
6417853 | Squires et al. | Jul 2002 | B1 |
6433784 | Merrick et al. | Aug 2002 | B1 |
6434597 | Hachiya et al. | Aug 2002 | B1 |
6449634 | Capiel | Sep 2002 | B1 |
6453294 | Dutta et al. | Sep 2002 | B1 |
6460075 | Krueger et al. | Oct 2002 | B2 |
6462742 | Rose et al. | Oct 2002 | B1 |
6466205 | Simpson et al. | Oct 2002 | B2 |
6466213 | Bickmore et al. | Oct 2002 | B2 |
6476815 | Ando | Nov 2002 | B1 |
6496868 | Krueger et al. | Dec 2002 | B2 |
6522333 | Hatlelid et al. | Feb 2003 | B1 |
6532011 | Francini et al. | Mar 2003 | B1 |
6535907 | Hachiya et al. | Mar 2003 | B1 |
6539354 | Sutton et al. | Mar 2003 | B1 |
6542936 | Mayle et al. | Apr 2003 | B1 |
6553341 | Mullaly et al. | Apr 2003 | B1 |
6606096 | Wang | Aug 2003 | B2 |
6631399 | Stanczak et al. | Oct 2003 | B1 |
6643385 | Bravomalo | Nov 2003 | B1 |
6654018 | Cosatto et al. | Nov 2003 | B1 |
6661418 | McMillan et al. | Dec 2003 | B1 |
6665860 | DeSantis et al. | Dec 2003 | B1 |
6680934 | Cain | Jan 2004 | B1 |
6766299 | Bellomo et al. | Jul 2004 | B1 |
6782431 | Mukherjee et al. | Aug 2004 | B1 |
6784901 | Harvey et al. | Aug 2004 | B1 |
6801931 | Ramesh et al. | Oct 2004 | B1 |
6833845 | Kitagawa et al. | Dec 2004 | B2 |
6919892 | Cheiky et al. | Jul 2005 | B1 |
6963839 | Ostermann et al. | Nov 2005 | B1 |
6975988 | Roth et al. | Dec 2005 | B1 |
6987535 | Matsugu et al. | Jan 2006 | B1 |
6990452 | Ostermann et al. | Jan 2006 | B1 |
7174295 | Kivimaki | Feb 2007 | B1 |
7177811 | Ostermann et al. | Feb 2007 | B1 |
7203648 | Ostermann et al. | Apr 2007 | B1 |
7203759 | Ostermann et al. | Apr 2007 | B1 |
20010019330 | Bickmore et al. | Sep 2001 | A1 |
20010049596 | Lavine et al. | Dec 2001 | A1 |
20010050681 | Keys et al. | Dec 2001 | A1 |
20010050689 | Park | Dec 2001 | A1 |
20020007276 | Rosenblatt et al. | Jan 2002 | A1 |
20020109680 | Orbanes et al. | Aug 2002 | A1 |
20020176604 | Shekhar et al. | Nov 2002 | A1 |
20020194006 | Challapali | Dec 2002 | A1 |
20030028378 | August et al. | Feb 2003 | A1 |
20030035412 | Wang et al. | Feb 2003 | A1 |
20030046160 | Paz-Pujalt et al. | Mar 2003 | A1 |
20030046348 | Pinto et al. | Mar 2003 | A1 |
20030191816 | Landress et al. | Oct 2003 | A1 |
20040018858 | Nelson | Jan 2004 | A1 |
20040091154 | Cote | May 2004 | A1 |
20050091305 | Lange et al. | Apr 2005 | A1 |
20070033259 | Wies et al. | Feb 2007 | A1 |
Number | Date | Country |
---|---|---
849691 | Jun 1998 | EP |
849692 | Jun 1998 | EP |
1 111 883 | Dec 1999 | EP |
2003033575 | Feb 2003 | JP |
2002016482 | Mar 2002 | KR |
Number | Date | Country
---|---|---
60245521 | Nov 2000 | US |
 | Number | Date | Country
---|---|---|---
Parent | 10003350 | Nov 2001 | US
Child | 11214666 | | US