This invention relates to web interaction over a wireless network between wireless communication devices and an Internet application. In particular, the present invention relates to multi-modal web interaction over a wireless network, which enables users to interact with an Internet application in a variety of ways.
Wireless communication devices are becoming increasingly prevalent for personal communication needs. These devices include, for example, cellular telephones, alphanumeric pagers, “palmtop” computers, personal information managers (PIMs), and other small, primarily handheld communication and computing devices. Wireless communication devices have matured considerably in their features and now support not only basic point-to-point communication functions like telephone calling, but also more advanced communication functions, such as electronic mail, facsimile receipt and transmission, Internet access and browsing of the World Wide Web, and the like.
Generally, conventional wireless communication devices have software that manages various handset functions and the telecommunications connection to the base station. The software that manages all the telephony functions is typically referred to as the telephone stack. The software that manages the output and input, such as key presses and screen display, is referred to as the user interface or Man-Machine Interface or “MMI.”
U.S. Pat. No. 6,317,781 discloses a markup language based man-machine interface. The man-machine interface provides a user interface for the various telecommunication functions of the wireless communication device, including dialing telephone numbers, answering telephone calls, creating messages, sending messages, receiving messages, and establishing configuration settings. These functions are defined in a well-known markup language, such as HTML, and accessed through a browser program executed by the wireless communication device. This feature enables Internet and World Wide Web content, such as web pages, to be directly integrated with the telecommunication functions of the device, and allows web content to be seamlessly integrated with other types of data, because all data presented to the user via the user interface is presented via markup language based pages. Such a markup language based man-machine interface enables users to interact directly with an Internet application.
However, unlike conventional desktop or notebook computers, wireless communication devices have a very limited input capability. Desktop or notebook computers have cursor-based pointing devices, such as a computer mouse, trackball, or joystick, and full keyboards. This enables navigation of Web content by clicking and dragging scroll bars, clicking hypertext links, and keyboard tabbing between fields of forms, such as HTML forms. In contrast, wireless communication devices typically have only up and down keys and one to three soft keys. Thus, even with a markup language based man-machine interface, users of wireless communication devices are unable to interact effectively with an Internet application using conventional technology. Although some forms of speech recognition exist in the prior art, there is no prior art system that realizes multi-modal web interaction, which would enable users to perform web interaction over a wireless network in a variety of ways.
The features of the present invention will be more fully understood by reference to the accompanying drawings, in which:
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be appreciated by one of ordinary skill in the art that the present invention is not limited to these specific details.
Various embodiments of the present invention overcome the limitation of the conventional Man-Machine Interface for wireless communication by providing a system and method for multi-modal web interaction over a wireless network. The multi-modal web interaction of the present invention will enable users to interact with an Internet application in a variety of ways, including, for example:
Each of these modes can be used independently or concurrently. In one embodiment described in more detail below, the invention uses a multi-modal markup language (MML).
In one embodiment, the present invention provides an approach for web interaction over a wireless network. In this embodiment, a client system receives user inputs, interprets the user inputs to determine at least one of several web interaction modes, produces a corresponding client request, and transmits the client request. The server receives and interprets the client request to perform specific retrieval jobs, and transmits the result to the client system.
In one embodiment, the invention is implemented using a multi-modal markup language (MML) with a DSR (Distributed Speech Recognition) mechanism, a focus mechanism, a synchronization mechanism, and a control mechanism. The focus mechanism determines which active display element is to be focused and the ID of the focused display element. The synchronization mechanism retrieves the synchronization relation between a speech element and a display element to build the grammar of the corresponding speech element to deal with the user's speech input. The control mechanism controls the interaction between client and server. According to such an implementation, the multi-modal web interaction flow is shown by way of example as follows:
The various embodiments of the present invention described herein provide an approach to use Distributed Speech Recognition (DSR) technology to realize multi-modal web interaction. The approach enables each of several interaction modes to be used independently or concurrently.
As a further benefit of the present invention, with the focus mechanism and synchronization mechanism, the present invention enables speech recognition technology to be used feasibly to retrieve information on the web, improves the precision of speech recognition, reduces the computing resources necessary for speech recognition, and realizes real-time speech recognition.
As a further benefit of the present invention, with one implementation based on a multi-modal markup language, which extends XML by adding speech features, the approach of the present invention can be shared across communities. The approach can be used to help Internet Service Providers (ISPs) easily build server platforms for multi-modal web interaction. The approach can be used to help Internet Content Providers (ICPs) easily create applications with the feature of multi-modal web interaction. Specifically, Multi-modal Markup Language (MML) can be used to develop speech applications on the web for at least two scenarios:
This allows content developers to re-use code for processing user input. The application logic remains the same across scenarios: the underlying application does not need to know whether the information is obtained by speech or other input methods.
Referring now to
According to an embodiment of the present invention for web interaction over a wireless network, the client 14 interprets the user inputs to determine a web interaction mode, and produces and transmits a client request based on the interaction mode determination result; the multi-modal markup language (MML) server (gateway) 18 interprets the client request to perform specific retrieval jobs. The web interaction mode can be traditional input/output (for example, keyboard, keypad, mouse, and stylus input with plain text, graphics, and motion video output) or speech input with audio (synthesized speech) output. This embodiment enables users to browse the World Wide Web in a variety of ways. Specifically, users can interact with an Internet application via traditional input/output and speech input/output independently or concurrently.
In the following section, we describe a system for web interaction over a wireless network according to one embodiment of the present invention. The reference design given here is one implementation of MML. It extends XHTML Basic by adding speech features to enhance the XHTML modules. The motivation for XHTML Basic is to provide an XHTML document type that can be shared across communities, so that an XHTML Basic document can be presented on the maximum number of Web clients, such as mobile phones, PDAs, and smart phones. For this reason, MML is implemented based on XHTML Basic.
XHTML Basic Modules in One Embodiment:
Referring to
In the system 100, at the client 110, the web interaction mode interpreter 111 receives and interprets user inputs to determine the web interaction mode. The web interaction mode interpreter 111 also assists content interpretation in the client 110. In the case of traditional web interaction, the traditional input/output processor 114 processes the user input, and the data wrap 115 transmits a request to the server 120 for a new page or form submittal. In the case of speech interaction, the speech input/output processor 112 captures speech and extracts speech features, and the focus mechanism 113 determines which active display element is to be focused upon and the ID of the focused display element. The data wrap 115 then transmits the extracted speech features, the ID of the focused display element, and other information, such as the URL of the current page, to the MML server. At the MML server 120, the web interaction mode interpreter 121 receives and interprets the request from the client 110 to determine the web interaction mode. The web interaction mode interpreter 121 also assists content interpretation on the server 120. In the case of traditional web interaction, the HTTP processor 125 retrieves the new page or form from cache or from the web server 130. In the case of speech interaction, the synchronization mechanism 123 retrieves the synchronization relation between a speech element and a display element based on the received ID, and the dynamic grammar builder 124 builds the correct grammar based on the synchronization relation between the speech element and the display element. The speech recognition processor 122 performs speech recognition based on the correct grammar built by the dynamic grammar builder 124. According to the recognition result, the HTTP processor 125 retrieves the new page from cache or from the web server 130. The data wrap 126 then transmits a response to the client 110 based on the retrieved result. The control mechanisms 116 and 127 control the interaction between the client and the server.
The following section is a detailed description of one embodiment of the present invention using MML with a focus mechanism, a synchronization mechanism, and a control mechanism.
Focus Mechanism
In multi-modal web interaction, besides traditional input methods, speech input becomes a new input source. When using speech interaction, speech is detected and features are extracted at the client, and speech recognition is performed at the server. We note that the user will typically provide input using the following types of conventional display elements:
Considering the limitations of current speech recognition technology, in the multi-modal web interaction of the present invention, a focus mechanism is provided to focus the user's attention on the active display element(s) on which the user will perform speech input. A display element is focused by highlighting or otherwise rendering distinctive the display element upon which the user's speech input will be applied. When the identifier (ID) of the focused display element(s) is transmitted to the server, the server can perform speech recognition based on the corresponding relationship between the display element and the speech element. Therefore, instead of conventional dictation with a very large vocabulary, the vocabulary database of one embodiment is based on the hyperlinks, electronic forms, and other display elements on which users will perform speech input. At the same time, at the server, the correct grammar can be built dynamically based on the synchronization of display elements and speech elements. Therefore, the precision of speech recognition will be improved, the computing load of speech recognition will be reduced, and real-time speech recognition will actually be realized.
The MMI's of
In the conventional XHTML specification, a BUTTON is not allowed outside of a form. As our strategy is not to change the XHTML specification, the “Programmable Hardware Button” is adopted to focus a group of hyperlinks in one embodiment. A software button with the title “Talk to Me” is adopted to focus the electronic form display element. It will be apparent to one of ordinary skill in the art that other input means may equivalently be associated with focus for a particular display element.
When a “card” or page of a document is displayed on the display screen, no display element is initially focused. With the “Programmable Hardware Button” or “Talk to Me Button”, the user can perform web interaction through speech methods. If the user activates the “Programmable Hardware Button” or “Talk To Me Button”, the display element(s) to which the button belongs is focused. Then, possible circumstances might be as follows:
User Speech
Once a user causes focus on a particular display element, an utterance from the user is received and scored or matched against the available input selections associated with the focused display element. If the scored utterance is close enough to a particular input selection, a “match” event is produced and a new card or page is displayed.
The new card or page corresponds to the matched input selection. If the scored utterance cannot be matched to a particular input selection, a “no match” event is produced, an audio or text prompt is presented, and the display element remains focused.
The user may also use traditional means of causing a particular display element to be focused, such as pointing at an input area, for example a box in a form. In this case, the currently focused display element becomes unfocused as a different display element is selected.
The user may also point to a hypertext link, which causes a new card or page to be displayed. If the user points to another “Talk To Me Button”, the previously focused display element becomes unfocused and the display element to which the last activation belongs becomes focused.
If the user does nothing for longer than a pre-configured timeout, the focused display element may become unfocused.
Synchronization Mechanism
When the user wishes to provide input on a display element through speech, the grammar of the corresponding speech elements should be loaded at the server to deal with the user's speech input. A synchronization or configuration scheme for the speech element and the display element is therefore necessary. Following are two embodiments which accomplish this result.
One fundamental speech element has one grammar that includes all entrance words for a single speech interaction on the Web.
Each fundamental speech element must have one and only one corresponding display element, as follows:
Thus, it is necessary to perform a binding function to bind speech elements to corresponding display elements. In one embodiment, a “bind” attribute is defined in <mml:link>, <mml:sform>, and <mml:input>. It contains the information for one pair consisting of a display element and its corresponding speech element.
The following section presents sample source code for such a binding for hyperlink type display elements.
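By way of illustration, such a binding might take the following form. The element nesting and the attribute values shown here are illustrative assumptions rather than a normative syntax; the essential point is that each <mml:link> carries a bind attribute naming the id of an XHTML hyperlink, and a value attribute naming the part of the grammar it corresponds to.

<mml:card id="news" title="News Menu">
  <a id="sports" href="sports.xhtml">Sports</a>
  <a id="weather" href="weather.xhtml">Weather</a>
  <mml:speech>
    <mml:recog>
      <mml:group id="newslinks">
        <!-- hypothetical external grammar with parts "sports" and "weather" -->
        <mml:grammar src="news.gram"/>
        <mml:link value="sports" bind="sports"/>
        <mml:link value="weather" bind="weather"/>
      </mml:group>
    </mml:recog>
  </mml:speech>
</mml:card>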
The following section presents sample source code for a binding in an electronic form, such as an airline flight information form, for example.
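By way of illustration, a binding for an airline flight information form might be sketched as follows. The form identifiers, grammar source, and slot names are illustrative assumptions; the essential point is that <mml:sform> is bound to an XHTML <form> and each <mml:input> is bound to an XHTML <input>.

<form id="flightform" action="flightinfo.cgi" method="get">
  From: <input type="text" name="from" id="fromcity"/>
  To: <input type="text" name="to" id="tocity"/>
</form>
<mml:speech>
  <mml:recog>
    <mml:sform id="flightsform" bind="flightform">
      <!-- hypothetical grammar for city names -->
      <mml:grammar src="flight.gram"/>
      <!-- the value attribute names an assumed part of the recognition result -->
      <mml:input value="fromcity" bind="fromcity"/>
      <mml:input value="tocity" bind="tocity"/>
    </mml:sform>
  </mml:recog>
</mml:speech>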
Client-Server Control Mechanism
When performing multi-modal interaction, in order to signal the user agent and the server that certain actions have taken place, the system messages and other events produced at the client or the server should be well defined.
In an embodiment of the present invention, a Client-Server Control Mechanism is designed to define the system messages and MML events that are needed to control the interaction between the client and server.
Table 1 includes a representative set of system messages and MML events.
System Messages:
The System Messages allow the client and server to exchange system information. Some types of System Messages are triggered by the client and sent to the server; others are triggered by the server and sent to the client.
In one embodiment, the System messages triggered at the client include the following:
The Session Message is sent when the client initializes the connection to the server. A Ready Message or an Error Message is expected to be received from the server after the Session Message is sent. Below is an example of the Session Message:
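The exact message format is implementation dependent; the sketch below assumes a simple XML envelope with hypothetical field names.

<mml:message type="session">
  <!-- hypothetical client identification fields -->
  <client id="device-001" useragent="MML-Browser/1.0"/>
</mml:message>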
The Transmission Message (Client) is sent after the client establishes the session with the server. A Transmission Message (Server) or an Error Message is expected to be received from the server after the Transmission Message (Client) is sent. Below is an example of the Transmission message:
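A Transmission Message (Client) might, for example, propose parameters for transmitting the DSR speech features; the parameter names and values below are illustrative assumptions under the same assumed envelope.

<mml:message type="transmission">
  <param name="frontend" value="dsr-feature-v1"/>
  <param name="framerate" value="100"/>
</mml:message>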
OnFocus and UnFocus messages are special client side System Messages.
OnFocus occurs when the user points on, presses, or otherwise activates the “Talk Button” (here “Talk Button” means the “Programmable Hardware Button” and the “Talk to Me Button”). When OnFocus occurs, the client will perform the following tasks:
Below is an example of the OnFocus message to be transmitted to the server:
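Consistent with the description above, an OnFocus Message carries at least the ID of the focused display element and the URL of the current page; the envelope below is an illustrative assumption.

<mml:message type="onfocus">
  <focus id="newslinks" url="http://www.example.com/news.mml"/>
</mml:message>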
It is recommended that the OnFocus Message be transmitted together with speech features rather than transmitted alone. The reason is to optimize and reduce unnecessary communication and server load in cases such as the following: when the user switches and presses two different “Talk Buttons” in one card or on one page before entering one utterance, the client would otherwise send an unnecessary OnFocus Message to the server and cause the server to build a grammar unnecessarily.
But a software vendor can choose to implement the OnFocus Message as transmitted alone.
When UnFocus occurs, the client will perform the task of closing the microphone. UnFocus occurs in the following cases:
The Exit Message is sent when the client quits the session. Below is an example:
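A minimal sketch of the Exit Message, with a hypothetical session identifier, under the same assumed envelope:

<mml:message type="exit">
  <session id="s-42"/>
</mml:message>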
System Messages Triggered at the Server
The Ready Message is sent by the server when the client sends the Session Message first and the server is ready to work. Below is an example:
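A minimal sketch of the Ready Message, with a hypothetical session identifier assigned by the server:

<mml:message type="ready">
  <session id="s-42"/>
</mml:message>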
The Transmission message is sent by the server when the client sends a transmission message first or the network status has changed. This message is used to notify the client of the transmission parameters the client should use. Below is an example:
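A sketch of the Transmission Message (Server), notifying the client of the transmission parameters to use; the parameter name and value are illustrative assumptions.

<mml:message type="transmission">
  <param name="framerate" value="50"/>
</mml:message>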
The Error Message is sent by the server. If the server encounters an error while processing the client request, the server will send an Error Message to the client. Below is an example:
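A sketch of the Error Message, with a hypothetical error code and description:

<mml:message type="error">
  <error code="500" text="internal error while building grammar"/>
</mml:message>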
MML Events
The purpose of MML events is to supply a flexible interface framework for handling various processing events. MML events can be categorized as client-produced events and server-produced events according to the event source. These events may need to be communicated between the client and the server.
In the MML definition, the element of event processing instruction is <mml:onevent>. There are four types of events:
Events Triggered at the Server
When speech processing results in a match, if the page developer has added processing instructions in the handler of the “match” event in the MML page, the event is sent to the client as in the following example:
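The exact event format is implementation dependent; a sketch of a “match” event delivered to the client, with hypothetical attributes identifying the matched selection, might be:

<mml:event type="match" trigger="server">
  <result value="sports" href="sports.xhtml"/>
</mml:event>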
If the page developer doesn't handle the “match”, no event is sent to the client.
When speech processing results in a non-match, if the page developer has added processing instructions in the handler of the “nomatch” event in the MML page, the event is sent to the client as in the following example:
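A corresponding sketch of the “nomatch” event, assuming the same hypothetical event envelope:

<mml:event type="nomatch" trigger="server"/>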
If the page developer doesn't handle the “nomatch”, the event is sent to the client as follows:
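In the unhandled case, the event might carry only its type, leaving the client to apply its default behavior (again an illustrative assumption):

<mml:event type="nomatch"/>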
The “Onload” event occurs when certain display elements are loaded. This event type is only valid when the trigger attribute is set to “client”. The page developer can add processing instructions in the handler of the “Onload” event in the MML page:
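For instance, a page developer might attach a handler as follows; the <mml:do> target and action shown are hypothetical:

<mml:onevent type="onload" trigger="client">
  <mml:do target="welcomePrompt" action="activate"/>
</mml:onevent>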
No “Onload” event needs to be sent to the server.
The “Unload” event occurs when certain display elements are unloaded. This event type is only valid when the trigger attribute is set to “client”. The page developer can add processing instructions in the handler of the “Unload” event in the MML page:
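A corresponding sketch for the “Unload” case, under the same assumptions:

<mml:onevent type="unload" trigger="client">
  <mml:do target="goodbyePrompt" action="activate"/>
</mml:onevent>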
No “Unload” event needs to be sent to the server.
MML Events Conformance
The MML Events Mechanism of one embodiment is an extension of the conventional XML Event Mechanism. As shown in
To simplify the event mechanism, to improve efficiency, and to further ease implementation, we developed the MML Simple Events Mechanism of one embodiment. As shown in
MML Events in the embodiment have a unified event interface with the host language (XHTML) but are independent from the traditional events of the host language. Page developers can write events in the MML web page by adding a <mml:onevent> tag as the child node of an observer node or a target node.
1) Connection:
When a mismatch occurs in the above four steps, an error message will be sent to the client from the server.
2) Speech Interaction:
As described above, various embodiments of the present invention provide a focus mechanism, a synchronization mechanism, and a control mechanism, which are implemented by MML. MML extends XHTML Basic by adding speech feature processing.
The following is a detailed explanation of each MML element.
Referring still to
The <Card> Element:
The <mml:card> element is used to divide the whole document into cards or pages (segments). The client device displays one card at a time; this is optimized for small display devices and wireless transmission. Multiple card elements may appear in a single document. Each card element represents an individual presentation or interaction with the user.
The <mml:card> element is the only element of MML that bears a relation to content presentation and document structure.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional title attribute specifies the string that would be displayed on the title bar of the user agent when the associated card is loaded and displayed.
The optional style attribute specifies the XHTML inline style. The scope of the style is the whole card, but it may be overridden by child XHTML elements, which can define their own inline style.
The <Speech> Element:
The <mml:speech> element is the container of all speech relevant elements. The child elements of <mml:speech> can be <mml:recog> and/or <mml:prompt>.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The <Recog> Element:
The <mml:recog> element is the container of speech recognition elements.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The <Group> Element:
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional mode attribute specifies the speech recognition modes. Two modes are supported:
The optional accuracy attribute specifies the lowest accuracy of speech recognition that the page developers will accept. The following styles are supported:
The <Link> Element:
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The required value attribute specifies which part of the grammar the <mml:link> element corresponds to.
The required bind attribute specifies which XHTML hyperlink (such as <a>) the element is to be bound with.
The <Sform> Element:
The <mml:sform> element functions as the speech input form. It should be bound with the XHTML <form> element.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional mode attribute specifies the speech recognition modes. Two modes are supported:
The optional accuracy attribute specifies the lowest accuracy of speech recognition that the page developers will accept. The following styles are supported:
The <Input> Element:
The <mml:input> element functions as the speech input data placeholder. It should be bound with an XHTML <input>.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional value attribute specifies which part of the speech recognition result should be assigned to the bound XHTML <input> tag. If this attribute is not set, the whole speech recognition result will be assigned to the bound XHTML <input> tag.
The required bind attribute specifies which XHTML <input> in the <form> is to be bound with.
The <Grammar> Element:
The <mml:grammar> element specifies the grammar for speech recognition.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional src attribute specifies the URL of the grammar document. If this attribute is not set, the grammar content should be in the content of <mml:grammar>.
The <Prompt> Element:
The <mml:prompt> element specifies the prompt message.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional type attribute specifies the prompt type. Three types are supported in one embodiment:
If this attribute is set to “text”, the client side user agent should ignore the “loop” and “interval” attributes.
If the client side user agent has no TTS engine, it may override this “type” attribute from “tts” to “text”.
The optional src attribute specifies the URL of the prompt output document.
If this attribute is not set, the prompt content should be in the content of <mml:prompt>.
The optional loop attribute specifies how many times the speech output should be activated. Two modes are supported in one embodiment:
The optional interval attribute specifies the spacing time between two rounds of the speech output. It needs to be set only when the loop attribute is set to “loop”. Format:
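By way of illustration, a prompt element bringing these attributes together might be written as follows; the attribute values, including the interval notation, are illustrative assumptions rather than a defined format:

<mml:prompt id="p1" type="tts" loop="loop" interval="3000">
  Please say the name of a city.
</mml:prompt>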
The <Onevent> Element:
The <mml:onevent> element is used to intercept certain events.
The user agent (both the client and the server) MUST ignore any <mml:onevent> element specifying a type that does not correspond to a legal event for the immediately enclosing element. For example, the server must ignore a <mml:onevent type=“onload”> in a <mml:sform> element.
The type attribute indicates the name of the event.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The required type attribute specifies the event type that would be handled. The following event types are supported in one embodiment:
The required trigger attribute specifies whether the event is to occur at the client or the server side.
The optional phase attribute specifies when the <mml:onevent> will be activated by the desired event. If the user agent (including client and server) supports MML Simple Content Events Conformance, this attribute should be ignored.
<mml:onevent> should intercept the event during the capture phase.
The optional propagate attribute specifies whether the intercepted event should continue propagating (XML Events Conformance). If the user agent (including client and server) supports MML Simple Content Events Conformance, this attribute should be ignored. The following modes are supported in one embodiment:
The intercepted event will continue propagating.
The intercepted event will stop propagating.
The optional defaultaction attribute specifies whether the default action for the event (if any) should be performed after this event has been handled by <mml:onevent>.
For Instance:
The default action is cancelled.
The <Do> Element:
The <mml:do> element is always a child element of an <mml:onevent> element. When the <mml:onevent> element intercepts a desired event, it will invoke the behavior specified by the contained <mml:do> element.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The optional target attribute specifies the id of the target element that will be invoked.
The optional href attribute specifies the URL or Script to the associated behavior. If the target attribute is set, this attribute will be ignored.
The optional action attribute specifies the action type that will be invoked on the target or URL.
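Combining <mml:onevent> and <mml:do>, a handler for the “nomatch” event might be sketched as follows; the target id and action value are hypothetical:

<mml:onevent type="nomatch" trigger="client" propagate="stop">
  <mml:do target="retryPrompt" action="activate"/>
</mml:onevent>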
The <Getvalue> Element:
The <mml:getvalue> element is a child element of <mml:prompt>. It is used to get the content from a <form> or <sform> data placeholder.
The optional id attribute specifies the unique identifier of the element in the scope of the whole document.
The required at attribute specifies whether the value to be assigned is at the client or the server:
1. “client” (default value)
The <mml:getvalue> element gets a client side element value.
In this case, the from attribute should be set to a data placeholder of a <form>.
2. “server”
The process flow is as follows:
The process flow is as follows:
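By way of illustration, <mml:getvalue> might be used inside a prompt to read back a value entered in a bound form; the from attribute value below is a hypothetical placeholder id:

<mml:prompt type="tts">
  You are leaving from
  <mml:getvalue at="client" from="fromcity"/>
</mml:prompt>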
The following section describes the flow of client and server interaction in the system for multi-modal web interaction over a wireless network according to one embodiment of the present invention.
Unlike traditional web interaction and telephony interaction, the system of the present invention supports multi-modal web interaction. Because the main speech recognition processing job is handled by the server, the multi-modal web page will be interpreted at both the client and the server side. The following is an example of the simple flow of client and server interaction using an embodiment of the present invention.
Thus, an inventive multi-modal web interaction approach with a focus mechanism, synchronization mechanism, and control mechanism implemented by MML is disclosed. The scope of protection of the claims set forth below is not intended to be limited to the particulars described in connection with the detailed description of various embodiments of the present invention provided herein.
This application is a continuation of, and claims priority to, co-pending U.S. application Ser. No. 10/534,661, filed on Nov. 10, 2005, which claims priority benefit of International Application No. PCT/CN2002/000807, filed on Nov. 13, 2002, both herein incorporated by reference.