1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for automatic speech recognition.
2. Description of Related Art
User interaction with applications running on small devices through a keyboard or stylus has become increasingly limited and cumbersome as those devices have become increasingly smaller. In particular, small handheld devices like mobile phones and PDAs serve many functions and contain sufficient processing power to support user interaction through multimodal access, that is, by interaction in non-voice modes as well as voice mode. Devices which support multimodal access combine multiple user input modes or channels in the same interaction allowing a user to interact with the applications on the device simultaneously through multiple input modes or channels. The methods of input include speech recognition, keyboard, touch screen, stylus, mouse, handwriting, and others. Multimodal input often makes using a small device easier.
Multimodal applications are often formed by sets of markup documents served up by web servers for display on multimodal browsers. A ‘multimodal browser,’ as the term is used in this specification, generally means a web browser capable of receiving multimodal input and interacting with users with multimodal output, where modes of the multimodal input and output include at least a speech mode. Multimodal browsers typically render web pages written in XHTML+Voice (‘X+V’). X+V provides a markup language that enables users to interact with a multimodal application, often running on a server, through spoken dialog in addition to traditional means of input such as keyboard strokes and mouse pointer action. Visual markup tells a multimodal browser what the user interface is to look like and how it is to behave when the user types, points, or clicks. Similarly, voice markup tells a multimodal browser what to do when the user speaks to it. For visual markup, the multimodal browser uses a graphics engine; for voice markup, the multimodal browser uses a speech engine. X+V adds spoken interaction to standard web content by integrating XHTML (eXtensible Hypertext Markup Language) and speech recognition vocabularies supported by VoiceXML. For visual markup, X+V includes the XHTML standard. For voice markup, X+V includes a subset of VoiceXML. For synchronizing the VoiceXML elements with corresponding visual interface elements, X+V uses events. X+V includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific events. Voice interaction features are integrated with XHTML and can consequently be used directly within XHTML content.
In addition to X+V, multimodal applications also may be implemented with Speech Application Language Tags (‘SALT’). SALT is a markup language developed by the SALT Forum. Both X+V and SALT are markup languages for creating applications that use voice input/speech recognition and voice output/speech synthesis. Both SALT applications and X+V applications use underlying speech recognition and synthesis technologies or ‘speech engines’ to do the work of recognizing and generating human speech. As markup languages, both X+V and SALT provide markup-based programming environments for using speech engines in an application's user interface. Both languages have language elements, markup tags, that specify what the speech-recognition engine should listen for and what the synthesis engine should ‘say.’ Whereas X+V combines XHTML, VoiceXML, and the XML Events standard to create multimodal applications, SALT does not provide a standard visual markup language or eventing model. Rather, it is a low-level set of tags for specifying voice interaction that can be embedded into other environments. In addition to X+V and SALT, multimodal applications may be implemented in Java with a Java speech framework, in C++, for example, and with other technologies and in other environments as well.
Current lightweight voice solutions require a developer to build a grammar and lexicon to limit the potential number of words that an automated speech recognition (‘ASR’) engine must recognize—as a means for increasing accuracy. Pervasive devices have limited interaction and input modalities due to the form factor of the device, and kiosk devices have limited interaction and input modalities by design. In both cases the use of speaker independent voice recognition is implemented to enhance the user experience and interaction with the device. The state of the art in speaker independent recognition allows for some sophisticated voice applications to be written as long as there is a limited vocabulary associated with each potential voice command. For example, if the user is prompted to speak the name of a city the system can, with a good level of confidence, recognize the name of the city spoken.
Computer applications that employ a speech user interface with finite state grammars need to be able to build those grammars dynamically based on the user's interaction with the application. Dynamically built grammars can use the current context of the application to build a grammar that is smaller and more apropos to the context, resulting in higher performance and increased accuracy of speech recognition. An example of this principle would be an application that prompts for a user's home location, including city and state. By asking for the state first, the application can dynamically build a city grammar consisting only of the cities in the state chosen first by the user.
This pattern of user interaction, however, suffers from being stilted and unnatural owing to the two steps of first obtaining the state and then the city. The more natural interaction is to allow the user to say “Boca Raton Florida” and recognize both city and state from a single utterance. Depending on the application, however, the static grammar required to support the single utterance may be larger than can be supported by the available computing resources. Building a grammar to support recognition of a city and state from a single utterance, for example, may require building a grammar containing all the cities in the United States, a grammar that may be too voluminous for use on many multimodal devices.
Methods, apparatus, and computer program products are described for automatic speech recognition, the method implemented with a speech recognition grammar of a multimodal application in an automatic speech recognition (‘ASR’) engine with the multimodal application operating on a multimodal device supporting multiple modes of user interaction with the multimodal application, the modes of user interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to the ASR engine, including: matching by the ASR engine at least one static rule of the speech recognition grammar with at least one word of a voice utterance, yielding at least one matched value, the matched value specified by the grammar to be required for processing of a dynamic rule of the grammar; and dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon the matched value, the dynamic rule comprising a rule that is specified by the grammar as a rule that is not to be processed by the ASR engine until after the at least one static rule has been matched.
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular descriptions of exemplary embodiments of the invention as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts of exemplary embodiments of the invention.
Exemplary methods, apparatus, and products for automatic speech recognition according to embodiments of the present invention are described with reference to the accompanying drawings, beginning with
The system of
The grammar (104) in this example may take a form like the following sketch, reconstructed here from the description below (the original listing is not reproduced in this text, and the JSGF header is an assumption):
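#JSGF V1.0;
grammar command;
// reconstructed sketch; the rule contents are taken from the description in this specification
public <command> = (call | phone | telephone) <name> <when>;
<name> = bob | martha | joe | pete | chris | john | artoush;
<when> = today | this afternoon | tomorrow | next week;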
This grammar contains three rules named respectively <command>, <name>, and <when>. The elements <name> and <when> inside the <command> rule are references to the rules named <name> and <when>. Such rule references require that the referenced rules be matched by an ASR engine in order for the referring rule to be matched. In this example, therefore, the <name> rule and the <when> rule must both be matched by an ASR engine with speech from a user utterance in order for the <command> rule to be matched. The rules just above are ‘static’ grammar rules that are, in this example at least, equivalent to traditional rules of a voice recognition grammar. According to embodiments of the present invention, however, ‘static’ rules, unlike traditional grammar rules, can also include dynamic rule references. Also according to embodiments of the present invention, a grammar may contain dynamic rules, that is, rules specified by the grammar as rules that are not to be processed by the ASR engine until after at least one static rule has been matched. Such dynamic rules are dynamically defined at run time as new static rules in dependence upon a matched value of a previously matched static rule. The following grammar, for example, sketched here as reconstructed from the discussion below:
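#JSGF V1.0;
grammar exampleGrammar;
// reconstructed sketch; the rule bodies are taken from the examples discussed below
public <command> = add <<brand>>(<item>) <item> to my shopping list;
<item> = peppers | tomatoes | toothpaste;
<<brand>> = http://groceries.com/brand.jsp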
uses a double-bracket syntax and a parameter list to specify the <<brand>> rule as a dynamic rule that is not to be processed by an ASR engine until after the <item> rule has been matched. In this <<brand>> example, the static <command> rule contains a rule reference:
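<<brand>>(<item>)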
The dynamic <<brand>> rule is initially defined in this example grammar only by a URL:
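<<brand>> = http://groceries.com/brand.jsp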
The URL identifies a computer resource capable of dynamically defining at run time the dynamic <<brand>> rule of the grammar as a new static rule in dependence upon the matched value from the <item> rule, the rule required to be matched before the dynamic rule is processed. In this example, the computer resource so identified is a Java Server Page (‘JSP’) located at http://groceries.com. The JSP is a computer resource that is programmed to define the dynamic <<brand>> rule of the grammar as a new static rule in dependence upon the matched value from the <item> rule. The ASR engine expands the definition of the <<brand>> rule with the result of the match from the <item> rule and provides the expansion to the JSP page, which returns a new static rule. In this way, the ASR engine may dynamically define the dynamic rule at run time as a new static rule by expanding the definition of the dynamic rule with a matched value of the referenced static <item> rule. If the <item> rule were matched with “peppers,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
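http://groceries.com/brand.jsp?item="peppers"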
And the new static rule returned from the JSP page may be, for example:
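<brand> = del monte | green giant | goya   // brand names here are illustrative assumptions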
If the <item> rule were matched with “tomatoes,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
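http://groceries.com/brand.jsp?item="tomatoes"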
And the new static rule returned from the JSP page may be, for example:
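<brand> = hunts | del monte | muir glen   // brand names here are illustrative assumptions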
If the <item> rule were matched with “toothpaste,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
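http://groceries.com/brand.jsp?item="toothpaste"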
And the new static rule returned from the JSP page may be, for example:
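<brand> = colgate | crest | gleem   // “crest” appears in the use case below; the other names are illustrative assumptions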
And so on—with a different definition of the new static rule possible for each matched value of the referenced static <item> rule.
Note that in this example, the dynamic <<brand>> rule occurs in document order after the static <item> rule whose match value is required before the dynamic rule can be processed. In this example, the ASR engine typically will match the <item> rule in document order before processing the <<brand>> rule. This document order, however, is not a limitation of the present invention. The static and dynamic rules may occur in any document order in the grammar, and, if a dynamic rule is set forth in the grammar ahead of a static rule upon which the dynamic rule depends, then the ASR engine is configured to make more than one pass through the grammar, treating the dynamic rule in the meantime as a rule that matches any speech in the utterance until a next rule match, a next token match, or the end of processing of the pertinent user utterance.
A multimodal device is an automated device, that is, automated computing machinery or a computer program running on an automated device, that is capable of accepting from users more than one mode of input, keyboard, mouse, stylus, and so on, including speech input—and also displaying more than one mode of output, graphic, speech, and so on. A multimodal device is generally capable of accepting speech input from a user, digitizing the speech, and providing digitized speech to a speech engine for recognition. A multimodal device may be implemented, for example, as a voice-enabled browser on a laptop, a voice browser on a telephone handset, an online game implemented with Java on a personal computer, and with other combinations of hardware and software as may occur to those of skill in the art. Because multimodal applications may be implemented in markup languages (X+V, SALT), object-oriented languages (Java, C++), procedural languages (the C programming language), and in other kinds of computer languages as may occur to those of skill in the art, this specification uses the term ‘multimodal application’ to refer to any software application, server-oriented or client-oriented, thin client or thick client, that administers more than one mode of input and more than one mode of output, typically including visual and speech modes.
The system of
Each of the example multimodal devices (152) in the system of
As mentioned, a multimodal device according to embodiments of the present invention is capable of providing speech to a speech engine for recognition. A speech engine is a functional module, typically a software module, although it may include specialized hardware also, that does the work of recognizing and generating or ‘synthesizing’ human speech. The speech engine implements speech recognition by use of a further module referred to in this specification as an ASR engine, and the speech engine carries out speech synthesis by use of a further module referred to in this specification as a text-to-speech (‘TTS’) engine. As shown in
A multimodal application (195) in this example provides speech for recognition and text for speech synthesis to a speech engine through a VoiceXML interpreter (149, 155). A VoiceXML interpreter is a software module of computer program instructions that accepts voice dialog instructions from a multimodal application, typically in the form of a VoiceXML <form> element. The voice dialog instructions include one or more grammars, data input elements, event handlers, and so on, that advise the VoiceXML interpreter how to administer voice input from a user and voice prompts and responses to be presented to a user. The VoiceXML interpreter administers such dialogs by processing the dialog instructions sequentially in accordance with a VoiceXML Form Interpretation Algorithm (‘FIA’).
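A minimal sketch of such a voice dialog may make the point concrete; the grammar URL, field name, and submit target below are hypothetical, supplied for illustration only:

<vxml version="2.0">
  <form id="shopping">
    <field name="command">
      <!-- hypothetical grammar location -->
      <grammar src="http://groceries.com/exampleGrammar.jsgf"/>
      <prompt>Say a command.</prompt>
      <filled>
        <!-- hand the recognized command back to the application -->
        <submit next="http://groceries.com/command.jsp"/>
      </filled>
    </field>
  </form>
</vxml>

Processing this dialog under the FIA, the VoiceXML interpreter visits the <field>, plays the prompt, collects an utterance against the grammar, and executes the <filled> element when recognition succeeds.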
As shown in
The VoiceXML interpreter provides grammars, speech for recognition, and text prompts for speech synthesis to the speech engine, and the VoiceXML interpreter returns to the multimodal application speech engine output in the form of recognized speech, semantic interpretation results, and digitized speech for voice prompts. In a thin client architecture, the VoiceXML interpreter (155) is located remotely from the multimodal client device in a voice server (151); the API for the VoiceXML interpreter is still implemented in the multimodal device, with the API modified to communicate voice dialog instructions, speech for recognition, and text and voice prompts to and from the VoiceXML interpreter on the voice server. For ease of explanation, only one (107) of the multimodal devices (152) in the system of
The use of these four example multimodal devices (152) is for explanation only, not for limitation of the invention. Any automated computing machinery capable of accepting speech from a user, providing the speech digitized to an ASR engine through a VoiceXML interpreter, and receiving and playing speech prompts and responses from the VoiceXML interpreter may be improved to function as a multimodal device for automatic speech recognition according to embodiments of the present invention.
The system of
The system of
The system of
The arrangement of the multimodal devices (152), the web server (147), the voice server (151), and the data communications network (100) making up the exemplary system illustrated in
Automatic speech recognition according to embodiments of the present invention in a thin client architecture may be implemented with one or more voice servers, computers, that is, automated computing machinery, that provide speech recognition and speech synthesis. For further explanation, therefore,
Stored in RAM (168) is a voice server application (188), a module of computer program instructions capable of operating a voice server in a system that is configured to carry out automatic speech recognition according to embodiments of the present invention. Voice server application (188) provides voice recognition services for multimodal devices by accepting requests for speech recognition and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and text as string representations of scripts for semantic interpretation. Voice server application (188) also includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal applications such as, for example, X+V applications, SALT applications, or Java Speech applications. Voice server application (188) may be implemented as a web server, implemented in Java, C++, or another language, that supports X+V, SALT, VoiceXML, or other multimodal languages, by providing responses to HTTP requests from X+V clients, SALT clients, Java Speech clients, or other multimodal clients. Voice server application (188) may, for a further example, be implemented as a Java server that runs on a Java Virtual Machine (102) and supports a Java voice framework by providing responses to HTTP requests from Java client applications running on multimodal devices. And voice server applications that support automatic speech recognition may be implemented in other ways as may occur to those of skill in the art, and all such ways are well within the scope of the present invention.
The voice server (151) in this example includes a speech engine (153). The speech engine is a functional module, typically a software module, although it may include specialized hardware also, that does the work of recognizing and generating human speech. The speech engine (153) includes an automated speech recognition (‘ASR’) engine for speech recognition and a text-to-speech (‘TTS’) engine for generating speech. The speech engine also includes a grammar (104), a lexicon (106), and a language-specific acoustic model (108). The language-specific acoustic model (108) is a data structure, a table or database, for example, that associates Speech Feature Vectors (‘SFVs,’ described below) with phonemes representing, to the extent that it is practically feasible to do so, all pronunciations of all the words in a human language. The lexicon (106) is an association of words in text form with phonemes representing pronunciations of each word; the lexicon effectively identifies words that are capable of recognition by an ASR engine. Also stored in RAM (168) is a Text To Speech (‘TTS’) Engine (194), a module of computer program instructions that accepts text as input and returns the same text in the form of digitally encoded speech, for use in providing speech as prompts for and responses to users of multimodal systems.
The grammar (104) communicates to the ASR engine (150) the words and sequences of words that currently may be recognized. For precise understanding, distinguish the purpose of the grammar and the purpose of the lexicon. The lexicon associates with phonemes all the words that the ASR engine can recognize. The grammar communicates the words currently eligible for recognition. The set of words currently eligible for recognition and the set of words capable of recognition may or may not be the same.
Grammars for use in automatic speech recognition according to embodiments of the present invention may be expressed in any format supported by any ASR engine, including, for example, the Java Speech Grammar Format (‘JSGF’), the format of the W3C Speech Recognition Grammar Specification (‘SRGS’), the Augmented Backus-Naur Format (‘ABNF’) from the IETF's RFC2234, in the form of a stochastic grammar as described in the W3C's Stochastic Language Models (N-Gram) Specification, and in other grammar formats as may occur to those of skill in the art. Grammars typically operate as elements of dialogs, such as, for example, a VoiceXML <menu> or an X+V <form>. A grammar's definition may be expressed in-line in a dialog. Or the grammar may be implemented externally in a separate grammar document and referenced from within a dialog with a URI. Here is an example of a grammar expressed in JSGF (sketched as reconstructed from the description that follows):
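#JSGF V1.0;
grammar command;
public <command> = (call | phone | telephone) <name> <when>;
<name> = bob | martha | joe | pete | chris | john | artoush;
<when> = today | this afternoon | tomorrow | next week;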
In this example, the elements named <command>, <name>, and <when> are rules of the grammar. Rules are a combination of a rulename and an expansion of a rule that advises an ASR engine or a voice interpreter which words presently can be recognized. In this example, expansion includes conjunction and disjunction, and the vertical bars ‘|’ mean ‘or.’ An ASR engine or a voice interpreter processes the rules in sequence, first <command>, then <name>, then <when>. The <command> rule accepts for recognition ‘call’ or ‘phone’ or ‘telephone’ plus, that is, in conjunction with, whatever is returned from the <name> rule and the <when> rule. The <name> rule accepts ‘bob’ or ‘martha’ or ‘joe’ or ‘pete’ or ‘chris’ or ‘john’ or ‘artoush’, and the <when> rule accepts ‘today’ or ‘this afternoon’ or ‘tomorrow’ or ‘next week.’ The command grammar as a whole matches utterances like these, for example:
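“phone bob next week,”
“telephone martha this afternoon,”
“call chris today,” and
“call pete tomorrow.”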
The voice server application (188) in this example is configured to receive, from a multimodal client located remotely across a network from the voice server, digitized speech for recognition from a user and pass the speech along to the ASR engine (150) for recognition. ASR engine (150) is a module of computer program instructions, also stored in RAM in this example. In carrying out automated speech recognition, the ASR engine receives speech for recognition in the form of at least one digitized word and uses frequency components of the digitized word to derive a Speech Feature Vector (‘SFV’). An SFV may be defined, for example, by the first twelve or thirteen Fourier or frequency domain components of a sample of digitized speech. The ASR engine can use the SFV to infer phonemes for the word from the language-specific acoustic model (108). The ASR engine then uses the phonemes to find the word in the lexicon (106).
Also stored in RAM is a VoiceXML interpreter (192), a module of computer program instructions that processes VoiceXML grammars. VoiceXML input to VoiceXML interpreter (192) may originate, for example, from VoiceXML clients running remotely on multimodal devices, from X+V clients running remotely on multimodal devices, from SALT clients running on multimodal devices, or from Java client applications running remotely on multimodal devices. In this example, VoiceXML interpreter (192) interprets and executes VoiceXML segments representing voice dialog instructions received from remote multimodal devices and provided to VoiceXML interpreter (192) through voice server application (188).
A multimodal application (195) in a thin client architecture may provide voice dialog instructions, VoiceXML segments, VoiceXML <form> elements, and the like, to VoiceXML interpreter (149) through data communications across a network. The voice dialog instructions include one or more grammars, data input elements, event handlers, and so on, that advise the VoiceXML interpreter how to administer voice input from a user and voice prompts and responses to be presented to a user. The VoiceXML interpreter administers such dialogs by processing the dialog instructions sequentially in accordance with a VoiceXML Form Interpretation Algorithm (‘FIA’). The VoiceXML interpreter interprets VoiceXML dialogs provided to the VoiceXML interpreter by a multimodal application.
The speech recognition grammar (104) in this example includes at least one static rule (520) and at least one dynamic rule (518).
The static rule (520) contains a dynamic rule reference (524) that has a static rule parameter list (526) containing at least one static rule reference (528). The dynamic rule reference (524) identifies a dynamic rule (518) that must be matched in order for the static rule (520) to be matched. The at least one static rule reference (528) identifies at least one static rule that must be matched before the dynamic rule may be processed for a match by the ASR engine (150). In this example static rule:
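<command> = add <<brand>>(<item>) <item> to my shopping list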
The ASR engine (150) in this example matches at least one static rule of the speech recognition grammar with at least one word of a voice utterance, yielding at least one matched value. The matched value is “peppers” when matched from a user utterance for the <item> rule. The matched value is specified by the grammar to be required for processing of the dynamic <<brand>> rule by the coding of the dynamic rule reference <<brand>>(<item>) in the static <command> rule.
The process of dynamically defining the dynamic rule as a new static rule may be carried out by the ASR engine (150) operating in conjunction with another computer resource that is specified in the definition of the dynamic rule (518) as a computer resource that is capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon the matched value. In the exampleGrammar set forth above, the definition of the dynamic <<brand>> rule specifies the resource located by the URL http://groceries.com/brand.jsp as a computer resource that is capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon a matched value of a static rule.
Also stored in RAM (168) is an operating system (154). Operating systems useful in voice servers according to embodiments of the present invention include UNIX™, Linux™, Microsoft NT™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art. Operating system (154), voice server application (188), VoiceXML interpreter (192), ASR engine (150), JVM (102), and TTS Engine (194) in the example of
Voice server (151) of
Voice server (151) of
The example voice server of
The exemplary voice server (151) of
For further explanation,
In addition to the voice server application (188), the voice server (151) also has installed upon it a speech engine (153) with an ASR engine (150), a grammar (104), a lexicon (106), a language-specific acoustic model (108), and a TTS engine (194), as well as a JVM (102) and a VoiceXML interpreter (192). VoiceXML interpreter (192) interprets and executes VoiceXML dialog instructions received from the multimodal application and provided to VoiceXML interpreter (192) through voice server application (188). VoiceXML input to VoiceXML interpreter (192) may originate from the multimodal application (195) implemented as an X+V client running remotely on the multimodal device (152). As noted above, the multimodal application (195) also may be implemented as a Java client application running remotely on the multimodal device (152), a SALT application running remotely on the multimodal device (152), and in other ways as may occur to those of skill in the art.
VOIP stands for ‘Voice Over Internet Protocol,’ a generic term for routing speech over an IP-based data communications network. The speech data flows over a general-purpose packet-switched data communications network, instead of traditional dedicated, circuit-switched voice transmission lines. Protocols used to carry voice signals over the IP data communications network are commonly referred to as ‘Voice over IP’ or ‘VOIP’ protocols. VOIP traffic may be deployed on any IP data communications network, including data communications networks lacking a connection to the rest of the Internet, for instance on a private building-wide local area data communications network or ‘LAN.’
Many protocols are used to effect VOIP. The two most popular types of VOIP are effected with the IETF's Session Initiation Protocol (‘SIP’) and the ITU's protocol known as ‘H.323.’ SIP clients use TCP and UDP port 5060 to connect to SIP servers. SIP itself is used to set up and tear down calls for speech transmission. VOIP with SIP then uses RTP for transmitting the actual encoded speech. Similarly, H.323 is an umbrella recommendation from the standards branch of the International Telecommunications Union that defines protocols to provide audio-visual communication sessions on any packet data communications network.
The apparatus of
Voice server application (188) provides voice recognition services for multimodal devices by accepting dialog instructions, VoiceXML segments, and returning speech recognition results, including text representing recognized speech, text for use as variable values in dialogs, and output from execution of semantic interpretation scripts as well as voice prompts. Voice server application (188) includes computer program instructions that provide text-to-speech (‘TTS’) conversion for voice prompts and voice responses to user input in multimodal applications such as, for example, X+V applications, SALT applications, or Java Speech applications.
The voice server application (188) receives speech for recognition from a user and passes the speech through API calls to VoiceXML interpreter (192) which in turn uses an ASR engine (150) for speech recognition. The ASR engine receives digitized speech for recognition, uses frequency components of the digitized speech to derive an SFV, uses the SFV to infer phonemes for the word from the language-specific acoustic model (108), and uses the phonemes to find the speech in the lexicon (106). The ASR engine then compares speech found as words in the lexicon to words in a grammar (104) to determine whether words or phrases in speech are recognized by the ASR engine.
The speech recognition grammar (104) in this example includes at least one static rule (520) and at least one dynamic rule (518).
The static rule (520) contains a dynamic rule reference (524) that has a static rule parameter list (526) containing at least one static rule reference (528). The dynamic rule reference (524) identifies a dynamic rule (518) that must be matched in order for the static rule (520) to be matched. The at least one static rule reference (528) identifies at least one static rule that must be matched before the dynamic rule may be processed for a match by the ASR engine (150). In this example static rule:
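<command> = add <<brand>>(<item>) <item> to my shopping list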
The ASR engine (150) in this example matches at least one static rule of the speech recognition grammar with at least one word of a voice utterance, yielding at least one matched value. The matched value is “peppers” when matched from a user utterance for the <item> rule. The matched value is specified by the grammar to be required for processing of the dynamic <<brand>> rule by the coding of the dynamic rule reference <<brand>>(<item>) in the static <command> rule.
The process of dynamically defining the dynamic rule as a new static rule may be carried out by the ASR engine (150) operating in conjunction with another computer resource that is specified in the definition of the dynamic rule (518) as a computer resource that is capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon the matched value. In the exampleGrammar set forth above, the definition of the dynamic <<brand>> rule specifies the resource located by the URL http://groceries.com/brand.jsp as a computer resource that is capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon a matched value of a static rule.
The multimodal application (195) is operatively coupled to the ASR engine (150). In this example, the operative coupling between the multimodal application and the ASR engine (150) is implemented with a VOIP connection (216) through a voice services module (130), then through the voice server application (188) and either JVM (102), VoiceXML interpreter (192), or SALT interpreter (103), depending on whether the multimodal application is implemented in X+V, Java, or SALT. The voice services module (130) is a thin layer of functionality, a module of computer program instructions, that presents an API (316) for use by an application level program in providing dialog instructions and speech for recognition to a voice server application (188) and receiving in response voice prompts and other responses. In this example, application level programs are represented by multimodal application (195), JVM (101), and multimodal browser (196).
The voice services module (130) provides data communications services through the VOIP connection and the voice server application (188) between the multimodal device (152) and the VoiceXML interpreter (192). The API (316) is the same API presented to applications by a VoiceXML interpreter when the VoiceXML interpreter is installed on the multimodal device in a thick client architecture.
Automatic speech recognition according to embodiments of the present invention in thick client architectures is generally implemented with multimodal devices, that is, automated computing machinery or computers. In the system of
The example multimodal device (152) of
The speech engine (153) in this kind of embodiment, a thick client architecture, often is implemented as an embedded module in a small form factor device such as a handheld device, a mobile phone, PDA, and the like. An example of an embedded speech engine useful for automatic speech recognition according to embodiments of the present invention is IBM's Embedded ViaVoice Enterprise. The example multimodal device of
Also stored in RAM (168) in this example is a multimodal application (195), a module of computer program instructions capable of operating a multimodal device as an apparatus that supports automatic speech recognition according to embodiments of the present invention. The multimodal application (195) implements speech recognition by accepting speech for recognition from a user and sending the speech for recognition through API calls to the ASR engine (150). The multimodal application (195) implements speech synthesis generally by sending words to be used as prompts for a user to the TTS engine (194). As an example of thick client architecture, the multimodal application (195) in this example does not send speech for recognition across a network to a voice server for recognition, and the multimodal application (195) in this example does not receive synthesized speech, TTS prompts and responses, across a network from a voice server. All grammar processing, voice recognition, and text to speech conversion in this example is performed in an embedded fashion in the multimodal device (152) itself.
More particularly, multimodal application (195) in this example is a user-level, multimodal, client-side computer program that provides a speech interface through which a user may provide oral speech for recognition through microphone (176), have the speech digitized through an audio amplifier (185) and a coder/decoder (‘codec’) (183) of a sound card (174) and provide the digitized speech for recognition to ASR engine (150). The multimodal application (195) may be implemented as a set or sequence of X+V documents executing in a multimodal browser (196) or microbrowser that passes VoiceXML grammars and digitized speech by calls through an API (316) directly to an embedded VoiceXML interpreter (192) for processing. The embedded VoiceXML interpreter (192) may in turn issue requests for speech recognition through API calls directly to the embedded ASR engine (150). Multimodal application (195) also can provide speech synthesis, TTS conversion, by API calls to the embedded TTS engine (194) for voice prompts and voice responses to user input.
In a further class of exemplary embodiments, the multimodal application (195) may be implemented as a Java voice application that executes on Java Virtual Machine (102) and issues calls through the VoiceXML API (316) for speech recognition and speech synthesis services. In further exemplary embodiments, the multimodal application (195) may be implemented as a set or sequence of SALT documents executed on a multimodal browser (196) or microbrowser that issues calls through the VoiceXML API (316) for speech recognition and speech synthesis services. In addition to X+V, SALT, and Java implementations, multimodal application (195) may be implemented in other technologies as will occur to those of skill in the art, and all such implementations are well within the scope of the present invention.
The speech recognition grammar (104) in this example includes at least one static rule (520) and at least one dynamic rule (518).
The static rule (520) contains a dynamic rule reference (524) that has a static rule parameter list (526) containing at least one static rule reference (528). The dynamic rule reference (524) identifies a dynamic rule (518) that must be matched in order for the static rule (520) to be matched. The at least one static rule reference (528) identifies at least one static rule that must be matched before the dynamic rule may be processed for a match by the ASR engine (150). In this example static rule:
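<command> = add <<brand>>(<item>) <item> to my shopping list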
The ASR engine (150) in this example matches at least one static rule of the speech recognition grammar with at least one word of a voice utterance, yielding at least one matched value. The matched value is “peppers” when matched from a user utterance for the <item> rule. The matched value is specified by the grammar to be required for processing of the dynamic <<brand>> rule by the coding of the dynamic rule reference <<brand>>(<item>) in the static <command> rule.
The process of dynamically defining the dynamic rule as a new static rule may be carried out by the ASR engine (150) operating in conjunction with another computer resource that is specified in the definition of the dynamic rule (518) as a computer resource that is capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon the matched value. In the exampleGrammar set forth above, the definition of the dynamic <<brand>> rule specifies the resource located by the URL http://groceries.com/brand.jsp as a computer resource that is capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon a matched value of a static rule.
The multimodal application (195) is operatively coupled to the ASR engine (150). In this example, the operative coupling between the multimodal application and the ASR engine (150) is implemented with either a JVM (102), a VoiceXML interpreter (192), or a SALT interpreter (103), depending on whether the multimodal application is implemented in X+V, Java, or SALT. When the multimodal application (195) is implemented in X+V, the operative coupling is effected through the multimodal browser (196), which provides an operating environment and an interpreter for the X+V application, and then through the VoiceXML interpreter, which passes grammars and voice utterances for recognition to the ASR engine. When the multimodal application (195) is implemented in Java Speech, the operative coupling is effected through the JVM (102), which provides an operating environment for the Java application and passes grammars and voice utterances for recognition to the ASR engine. When the multimodal application (195) is implemented in SALT, the operative coupling is effected through the SALT interpreter (103), which provides an operating environment and an interpreter for the SALT application and passes grammars and voice utterances for recognition to the ASR engine.
The multimodal application (195) in this example, running on a multimodal device (152) that contains its own VoiceXML interpreter (192) and its own speech engine (153) with no network or VOIP connection to a remote voice server containing a remote VoiceXML interpreter or a remote speech engine, is an example of a so-called ‘thick client architecture,’ so-called because all of the functionality for processing voice mode interactions between a user and the multimodal application—as well as the functionality for automatic speech recognition with dynamic grammar rules according to embodiments of the present invention—is implemented on the multimodal device itself.
For further explanation,
The multimodal application is operatively coupled (524) to the ASR engine (150). The operative coupling (524) provides a data communications path (504) from the multimodal application (195) to the ASR engine for speech recognition grammars. The operative coupling (524) provides a data communications path (506) from the ASR engine (150) to the multimodal application (195) for recognized speech and semantic interpretation results. The operative coupling may be effected with a JVM (102), a VoiceXML interpreter (192), or a SALT interpreter (103), as described above.
The method of
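In the static <command> rule of the exampleGrammar:

<command> = add <<brand>>(<item>) <item> to my shopping list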
the syntax <<brand>>(<item>) is a specification by the grammar that a matched value of the <item> rule is required for processing of the dynamic <<brand>> rule of exampleGrammar.
In the method of
The method of
In the method of
If the <item> rule were matched with “tomatoes,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
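http://groceries.com/brand.jsp?item="tomatoes"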
If the <item> rule were matched with “toothpaste,” for example, then the definition of the dynamic <<brand>> rule may be expanded as:
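http://groceries.com/brand.jsp?item="toothpaste"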
And the new static rule returned from the JSP may be, for example:
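<brand> = colgate | crest | gleem   // “crest” appears in the use case below; the other names are illustrative assumptions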
And so on—with a different definition of the new static rule possible for each matched value of the referenced static <item> rule.
The examples just above describe a JSP as a computer resource capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon a matched value, but the JSP is used only for explanation, not as a limitation of the present invention. A JSP is a dynamic server page, and other forms of dynamic server page may be used as a computer resource capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon a matched value: Active Server Pages (‘ASPs’), Common Gateway Interface (‘CGI’) scripts, PHP: Hypertext Preprocessor (‘PHP’) scripts, and so on. When a multimodal application is implemented as a Java application in a Java speech framework, then a computer resource capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule may be implemented as a Java callback function. Computer resources capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule may include dynamically linked native functions or local executable programs, with arguments marshalled according to the particular method. In addition, ECMAScripts in X+V pages may be used as computer resources capable of dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon the matched value, as in the following sketch (hypothetical; the function name and the rule strings returned are assumptions):
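<script type="text/javascript">
  // hypothetical ECMAScript resource: returns a new static rule
  // definition for <<brand>> given the matched <item> value
  function defineBrandRule(item) {
    if (item == "toothpaste") {
      return "<brand> = colgate | crest | gleem;";
    }
    if (item == "tomatoes") {
      return "<brand> = hunts | del monte;";   // illustrative brand names
    }
    return "<brand> = del monte;";             // illustrative default
  }
</script>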
And so on, so that many computer resources may be used for dynamically defining at run time the dynamic rule of the grammar as a new static rule in dependence upon a matched value, as will occur to those of skill in the art.
As mentioned earlier, a static rule parameter list may contain more than one static rule reference. If a dynamic rule reference in a static rule has more than one static rule reference, <rule1>, <rule2>, and so on, in its static rule parameter list, then the definition of the dynamic rule may be expanded, for example, as:
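http://groceries.com/brand.jsp?rule1="match1"&rule2="match2"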
with one expansion term for each referenced static rule, and a new static rule may be generated for each static rule reference in the static rule parameter list using each match from each static rule referenced in the static rule parameter list. In the exampleGrammar, if the static rule parameter list for the <<brand>> reference contained more than one static rule reference, such as, for example (a hypothetical <store> rule is used here for illustration):
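<<brand>>(<item> <store>)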
then the definition of the dynamic <<brand>> rule may be expanded as:
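http://groceries.com/brand.jsp?item="peppers"&store="safeway"

where “safeway” stands in for a value matched from the hypothetical <store> rule.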
and the new static rule returned from the JSP may in fact be a return of several static rules, one for each static rule reference in the static rule parameter list:
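(the rule names and brand names below are illustrative assumptions)

<brand> = del monte | green giant
<store_brand> = safeway select | o organics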
In the method of
sets forth two references to the dynamic <<brand>> rule:
In the method of
Also in exampleGrammar, the static rule referenced in the static rule parameter list (<item>) occurs in document order in the grammar after the dynamic rule that depends on the referenced static rule for processing by the ASR engine. So the ASR engine in this example has two reasons to treat the dynamic <<brand>> rule as continuing to match speech from the utterance currently being processed:
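First, the dynamic <<brand>> rule is specified by the grammar as a rule that is not to be processed until after the static <item> rule has been matched; and second, because the static <item> rule occurs in document order after the dynamic <<brand>> rule, no matched value for <item> is yet available when the ASR engine first encounters the dynamic rule.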
In a traditional ASR engine, prior to this invention, the ASR engine, upon encountering a rule, such as the dynamic <<brand>> rule, that cannot be matched in document order, would stop processing and report a failure to match the grammar. The report could be an exception thrown in a Java environment, a <nomatch> event in X+V, and so on. In this example, however, an ASR engine according to embodiments of the present invention, upon encountering the dynamic <<brand>> rule, continues processing the grammar by treating the dynamic <<brand>> rule as a rule that matches any speech in the utterance currently being processed until a next rule match, a next token match, or the end of processing of the utterance. A ‘token’ is a terminal grammar element. In the exampleGrammar, “add,” “to,” “my,” “shopping,” and “list” are tokens. So the ASR engine, upon encountering the dynamic <<brand>> rule, in effect, continues processing by matching arbitrary spoken input from the utterance (530) up to the next matched token, next matched rule, or until the ASR engine runs out of words in the utterance. The ASR engine may carry out such matching of arbitrary spoken input by treating arbitrary spoken input from the utterance as provisionally or temporarily generating a match value of NULL, for example. When the ASR engine does match a next token or rule of the grammar with spoken input from the utterance, the ASR engine continues processing by looping back through the grammar to again visit the dynamic <<brand>> rule.
Now the ASR engine knows from its encounter with the static <command> rule that processing the dynamic <<brand>> rule requires a matched value from the static <item> rule. If the ASR engine has such a matched value on its next loop through the grammar rules, the ASR engine can expand the definition of the <<brand>> rule and use the computer resource http://groceries.com/brand.jsp identified in the dynamic <<brand>> rule to dynamically define at run time the dynamic <<brand>> rule of the exampleGrammar as a new static rule in dependence upon the matched value.
For further explanation, a use case is now described for the exampleGrammar:
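#JSGF V1.0;
grammar exampleGrammar;
public <command> = add <<brand>>(<item>) <item> to my shopping list;
<item> = peppers | tomatoes | toothpaste;
<<brand>> = http://groceries.com/brand.jsp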
processed by an ASR engine with the user's spoken utterance for recognition: “Add Crest toothpaste to my shopping list.” The ASR engine treats the dynamic <<brand>> rule as matching <NULL>, matches “add to my shopping list” from the static <command> rule, and matches “toothpaste” from the static <item> rule. So the first pass on the grammar by the ASR engine recognizes “Add <NULL> toothpaste to my shopping list.” With the match value “toothpaste” from the static <item> rule, the definition of the dynamic <<brand>> rule is expanded to http://groceries.com/brand.jsp?item=“toothpaste.” Execution of the computer resource identified in the definition of the dynamic <<brand>> rule yields the new static rule:
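<brand> = colgate | crest | gleem   // “crest” is required for this use case; “colgate” and “gleem” are illustrative assumptions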
A second pass of the recognizer matches the <NULL> component of the utterance against the newly defined static grammar rule, matching the word “Crest.” The recognizer combines the results of the first and second passes and returns the combined recognition results to the multimodal application: “Add Crest toothpaste to my shopping list.” The ASR engine thus can make an arbitrary number of passes over a single utterance to resolve multiple dynamic rules in a grammar.
Exemplary embodiments of the present invention are described largely in the context of a fully functional computer system for automatic speech recognition with dynamic grammar rules. Readers of skill in the art will recognize, however, that the present invention also may be embodied in a computer program product disposed on computer-readable signal bearing media for use with any suitable data processing system. Such signal bearing media may be transmission media or recordable media for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of recordable media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Examples of transmission media include telephone networks for voice communications and digital data communications networks such as, for example, Ethernets™ and networks that communicate with the Internet Protocol and the World Wide Web. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method of the invention as embodied in a program product. Persons skilled in the art will recognize immediately that, although some of the exemplary embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present invention.
It will be understood from the foregoing description that modifications and changes may be made in various embodiments of the present invention without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present invention is limited only by the language of the following claims.
Other References

W3C, “Speech Recognition Grammar Specification”, Version 1.0, Mar. 16, 2004, http://www.w3.org/TR/speech-grammar (archived by http://www.archive.org on Jan. 5, 2006).
Nuance, “Nuance Grammar Developers Guide”, 2001, http://community.voxeo.com/library/grammar (archived by http://www.archive.org on Oct. 2, 2003).
Axelsson, et al., “XHTML+Voice Profile 1.2”, Mar. 16, 2004, pp. 1-53, http://www.voicexml.org/specs/mutlimodal/x+v/12/spec.html (retrieved Jun. 12, 2008).
W3C, “Voice Extensible Markup Language (VoiceXML) Version 2.0”, http://www.w3.org/TR/voicexml20 (retrieved Jul. 18, 2003).
W3C, “Voice Extensible Markup Language (VoiceXML) 2.1, W3C Candidate Recommendation”, Jun. 13, 2005, pp. 1-34, http://www.w3.org/TR/2005/CR-voicexml21-20050613/ (retrieved Jun. 12, 2008).
Didier Guillevic, et al., “Robust Semantic Confidence Scoring”, ICSLP 2002: 7th International Conference on Spoken Language Processing, Denver, Colorado, Sep. 16-20, 2002, p. 853.
U.S. Appl. No. 10/924,520, filed Aug. 24, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/945,112, filed Sep. 20, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/870,517, filed Jun. 17, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/441,839, filed May 20, 2003, S. Ativanichayaphong.
U.S. Appl. No. 11/062,731, filed Feb. 22, 2005, David Jaramillo.
U.S. Appl. No. 11/007,830, filed Dec. 8, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/945,119, filed Sep. 20, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 11/022,464, filed Dec. 22, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/741,997, filed Dec. 19, 2003, Akram Boughannam.
U.S. Appl. No. 10/741,499, filed Dec. 19, 2003, Akram Boughannam.
U.S. Appl. No. 11/056,493, filed Feb. 11, 2005, Ciprian Agapi.
U.S. Appl. No. 11/093,545, filed Mar. 30, 2005, Marc White.
U.S. Appl. No. 11/105,865, filed Apr. 14, 2005, Charles W. Cross, Jr.
U.S. Appl. No. 10/849,642, filed May 19, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/992,979, filed Nov. 19, 2004, Charles W. Cross, Jr.
U.S. Appl. No. 10/733,610, filed Dec. 11, 2003, Charles W. Cross, Jr.
U.S. Appl. No. 10/919,005, filed Dec. 2005, Eichenberger, et al.
U.S. Appl. No. 12/109,151, filed Apr. 2008, Agapi, et al.
U.S. Appl. No. 12/109,167, filed Apr. 2008, Agapi, et al.
U.S. Appl. No. 12/109,204, filed Apr. 2008, Agapi, et al.
U.S. Appl. No. 12/109,227, filed Apr. 2008, Agapi, et al.
U.S. Appl. No. 12/109,214, filed Apr. 2008, Agapi, et al.
International Search Report and Written Opinion for International application No. PCT/EP2008/051358, mailed Jun. 25, 2008.
International Search Report and Written Opinion for International application No. PCT/EP2008/051363, mailed Jun. 18, 2008.