The present invention relates generally to the field of telephony. More particularly, the invention provides a technique for providing multi-modal bookmarks. In accordance with exemplary aspects of the invention, a user who browses the web using a visual browser may bookmark a page, and that bookmark is later available to the user for browsing the web in voice mode.
As computer technology becomes more widely available, the transmission of voice information has become increasingly intertwined with the transmission of data. Thus, many telephones enable a user to interact with content either in an audio mode (i.e., using a speaker and a microphone) or in a visual mode (i.e., using a visual display and some type of input device, such as a keypad).
One application of modern telephones is the use of the “wireless web.” By using a telephone (e.g., a wireless telephone) with a WAP browser that renders Wireless Markup Language (WML) content, a user can access the Internet. Similarly, the user can access the Internet by telephone using a “voice browser” that allows the user to interact with Voice eXtensible Markup Language (VXML) content. Content is not always amenable to a single mode of communication. That is, in some cases it is most convenient to allow a user to interact with content either in voice mode or in visual mode, rather than to restrict the user to a particular mode. An architecture that permits a user to interact with a single application (or item of content) in both voice and visual modes can be referred to as “multi-modal.”
One problem that arises when a user wishes to access the web in both visual and voice modes is that a bookmark created with a first browser (e.g., a visual browser) is typically accessible only to that browser; thus, when the user accesses the web using a different browser (e.g., a voice browser), the page is no longer bookmarked.
In view of the foregoing, there is a need for a bookmark system that overcomes the drawbacks of the prior art.
The present invention provides a system for storing bookmarks, such that the bookmarks can be accessed using a plurality of browsers. For example, a user may browse the wireless web with a visual browser (e.g., a WAP browser), and may bookmark a particular page. The bookmark is then stored in a bookmark repository, which is accessible to both the visual browser and a voice browser. When the same user browses the wireless web using a voice browser, the user may access the bookmark that was created with the visual browser. The content used with the visual browser need not be identical to the content used with the voice browser; a relationship may be defined that links bookmarked visual content with equivalent voice content. The use of bookmarks in the context of the wireless web, visual browsers, and voice browsers is merely exemplary; in accordance with the invention, bookmarks may be used in connection with content other than wireless web content, and in connection with applications other than visual and voice browsers.
Other features of the invention are described below.
The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary constructions of the invention; however, the invention is not limited to the specific methods and instrumentalities disclosed. In the drawings:
The present invention provides a system and method whereby a user who browses content using a first browser (e.g., a visual browser) may bookmark a page such that the bookmark is available to the user for browsing the content with a second browser (e.g., a voice browser). For example, a user may bookmark a particular retail web site using a WAP browser, and then use the bookmark to access that page in a separate session with a voice browser.
In a preferred embodiment, wireless telephone 102 comprises a visual display 104, an audio speaker 105, a keypad 106, a microphone 107, and an antenna 108. Visual display 104 may, for example, be a Liquid Crystal Display (LCD) which displays text and graphics. Audio speaker 105 renders audio signals (e.g., signals received from other components in architecture 100) in order to produce audible sound. Keypad 106 may be an alpha-numeric keypad that allows a user to input alpha-numeric characters. Depending upon context, wireless telephone 102 may respond to input from keypad 106 by displaying appropriate characters on display 104, transmitting ASCII representations of such characters, or (in the case of numeric input) generating appropriate DTMF signals. Microphone 107 captures audio signals, which may, in one example, be digitally sampled by wireless telephone 102 for wireless transmission to other components of network architecture 100. Antenna 108 is used by wireless telephone 102 to transmit information to, and receive information from, components within architecture 100. For example, wireless telephone 102 may use antenna 108 to receive digital audio signals for rendering on speaker 105, to transmit digital audio signals captured by microphone 107, to receive data to be displayed on visual display 104, or to transmit data captured with keypad 106. Wireless telephone 102 may also contain computing components (not shown). For example, wireless telephone 102 may have a memory and a processor, which may be used to store and execute software (e.g., software that digitally samples audio signals captured with microphone 107, software that generates analog audio signals from digitally-sampled audio received through antenna 108, software that enables the browsing of content using visual display 104 and keypad 106, etc.). In one example, wireless telephone 102 may include a WAP browser that includes the capability to bookmark pages of the wireless web. The structure of a wireless telephone 102 that employs the components shown in
One feature of wireless telephone 102 is that it can be viewed as having two different “modes” of communication. On the one hand, wireless telephone 102 communicates in a “voice” mode; on the other hand, wireless telephone 102 communicates in a “visual” mode. In voice mode, wireless telephone 102 uses microphone 107 to capture audio (which may be digitally sampled and then transmitted through antenna 108), and uses speaker 105 to render audio (which may be received through antenna 108 in a digital form). “Voice” mode is exemplified by the conventional usage of a telephone, in which a first party uses the telephone to engage in two-way speech with another party. In “visual” mode, wireless telephone 102 uses keypad 106 to capture data (e.g., alpha-numeric data, which may be represented in ASCII form), and uses visual display 104 to render data. The captured data may be transmitted through antenna 108, and antenna 108 may also be used to receive the data that is to be displayed on visual display 104.
Wireless telephone 102 communicates with a wireless network switch 110. Wireless network switch 110 is coupled to a tower (not shown) that engages in two-way communication with wireless telephone 102 through antenna 108. Wireless network switch 110 connects wireless telephone 102 to various components, such as multi-modal platform 114 or a Public Switched Telephone Network (PSTN) (not shown). Multi-modal platform 114 is described in further detail below; a PSTN is known in the art and thus is not described herein.
In accordance with aspects of the invention, multi-modal platform 114 may facilitate communication with wireless telephone 102 in two “modes” (i.e., in voice mode and visual mode). For example, multi-modal platform 114 may be adapted to send audio information to and receive audio information from wireless telephone 102 through switch 110 using a voice channel. Multi-modal platform 114 may likewise be adapted to send visual data to and receive visual data from wireless telephone 102 through switch 110 using a data channel. Moreover, multi-modal platform 114 may be adapted to change between these two “modes” of communicating according to instructions or existing communications conditions. Multi-modal platform 114 may be embodied as a computing device programmed with instructions to perform these functions.
Multi-modal platform 114 may be or comprise a computing device that includes various types of processing capability and storage. In one example, multi-modal platform 114 includes an application engine 116. Application engine 116 is software that provides content to a user—either content located on multi-modal platform 114, or content that is generated elsewhere (e.g., at a web site provider's computing device). Application engine 116 provides the content either in a voice form or a visual form, depending on which mode is appropriate for telephone 102 to interact with the content. (The “appropriateness” of a particular mode may be determined by an express user instruction, an instruction from the underlying application, or an analysis of the operating conditions that weigh in favor of one mode or the other.)
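By way of illustration only, the following sketch shows one way in which such a mode selection might be expressed; the function and parameter names are hypothetical and are not part of the foregoing description.

```python
# Illustrative sketch only: one way the application engine's choice between
# voice and visual modes might be expressed. All names are hypothetical.

def select_mode(user_request=None, app_request=None, conditions=None):
    """Return 'voice' or 'visual' for the current interaction."""
    conditions = conditions or {}
    # An express user instruction takes precedence.
    if user_request in ("voice", "visual"):
        return user_request
    # Next, honor an instruction from the underlying application.
    if app_request in ("voice", "visual"):
        return app_request
    # Otherwise, weigh operating conditions (e.g., no data channel available).
    if not conditions.get("data_channel_available", True):
        return "voice"
    return "visual"
```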
Multi-modal platform 114 may include, or be associated with, bookmark repository 120. As noted above, telephone 102 may include a WAP browser that allows a user to browse the wireless web and to bookmark content therein. An exemplary process of bookmarking a page is described with reference to both
In one embodiment, when the user bookmarks a page (
Bookmark repository 120 is accessible to voice browser 118. Thus, if the user, subsequent to bookmarking a page as described above, engages in a browsing session using voice browser 118, the user may call upon the bookmark and return to the bookmarked page (block 206). Since bookmark repository 120 is accessible to both the visual browser and the voice browser, the user can access the same set of bookmarks regardless of whether he is browsing in voice mode or visual mode. If the user calls upon the bookmark in a mode that is different from the mode in which he originally generated the bookmark, the user will be brought to an equivalent page in the new mode (block 208), as more particularly described below.
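By way of illustration only, a shared bookmark repository of the kind described above might be sketched as follows; the class, methods, user identifier, and URLs shown are hypothetical examples.

```python
# Illustrative sketch only: a bookmark repository shared by a visual
# browser and a voice browser. Class, method, and URL names are hypothetical.

class BookmarkRepository:
    def __init__(self):
        self._bookmarks = {}          # user_id -> {bookmark name: URL}

    def add(self, user_id, name, url):
        self._bookmarks.setdefault(user_id, {})[name] = url

    def lookup(self, user_id, name):
        return self._bookmarks.get(user_id, {}).get(name)

repo = BookmarkRepository()
# A bookmark created during a visual (WAP) session...
repo.add("user-1", "news", "http://wap.example.com/news.wml")
# ...is later available during a voice (VXML) session with the same repository.
assert repo.lookup("user-1", "news") == "http://wap.example.com/news.wml"
```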
It will be appreciated that not all visual content has an exact analogue in voice, and vice versa. Thus, a mapping may be created that links visual content with equivalent voice content, such that every bookmark (or at least a subset of all potential bookmarks) has a defined meaning for both visual and voice browsing. The nature of this mapping depends on the relationship between the visual and voice content. Each bookmark is thus itself multi-modal, in that it can be accessed in both voice and visual modes.
In one example, every visual page has an equivalent voice page (e.g., a content provider provides a given page of content in both Wireless Markup Language (WML) and Voice eXtensible Markup Language (VXML)). In another example, every Uniform Resource Locator (URL) points to plural pages written in various markup languages, and every GET request for a URL includes an identification of the type of browser that generated the request. In this case, no mapping is necessary, since the server at which the URL is located provides a different page based on the type of browser identified in the GET request. In yet another example, a more complex relationship between visual and voice content is defined, such that every bookmark entered in one mode has meaning in the other mode, even if there is no obvious canonical relationship between the voice and visual content pages. One context in which this type of mapping may be necessary is where a bookmark generated in WML or VXML is to have meaning for Hypertext Markup Language (HTML) content, since HTML content is often structured differently from its WML or VXML counterparts.
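By way of illustration only, the following sketch shows one possible form of such a mapping, together with a fallback to server-side selection; the URLs and the query-parameter convention are hypothetical assumptions rather than part of the foregoing description.

```python
# Illustrative sketch only: a mapping from bookmarked visual (WML) pages to
# equivalent voice (VXML) pages. The URLs and query parameter are hypothetical.

CONTENT_MAP = {
    "http://retailer.example.com/catalog.wml":
        "http://retailer.example.com/catalog.vxml",
}

def resolve(url, browser_type):
    """Return the page to serve for a bookmarked URL, given the requesting browser."""
    if browser_type == "voice":
        # Use the defined mapping when one exists; otherwise fall back to
        # server-side selection based on a browser-type indication in the
        # request (shown here, purely for illustration, as a query parameter).
        return CONTENT_MAP.get(url, url + "?browser=voice")
    return url
```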
According to one feature of the invention, bookmark repository 120 may maintain a hierarchical structure of bookmarks, as specified by the user. Thus, if a user organizes bookmarks in folders (e.g., “sports,” “music,” etc.), the voice system may provide menus based on this hierarchical structure. Alternatively, a user may be able to speak the name of a bookmark, which bypasses the menus that support the hierarchical structure (or queries the user as to which sub-tree of the hierarchical structure the user intended, if two bookmarks have a common, non-unique name).
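By way of illustration only, a hierarchical bookmark structure and a lookup by spoken name might be sketched as follows; the folder names, bookmark names, and URLs are hypothetical.

```python
# Illustrative sketch only: a hierarchical bookmark store and a lookup by
# spoken name that detects non-unique names. All data shown is hypothetical.

FOLDERS = {
    "sports": {"scores": "http://example.com/scores.wml"},
    "music":  {"scores": "http://example.com/sheetmusic.wml",
               "charts": "http://example.com/charts.wml"},
}

def find_by_name(name):
    """Return (folder, url) pairs whose bookmark name matches the spoken name."""
    return [(folder, urls[name]) for folder, urls in FOLDERS.items() if name in urls]

matches = find_by_name("scores")
if len(matches) > 1:
    # Two bookmarks share the same name; the voice system would query the user
    # as to which sub-tree ("sports" or "music") was intended.
    pass
```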
According to another feature of the invention, a bookmark may specify a preferred “mode” for interacting with the bookmark. In other words, the bookmark may be stored in a data structure that includes, as one of its fields, a specification of mode (e.g., voice or visual), wherein, upon re-directing to the bookmarked content, multi-modal platform 114 switches to the mode specified in the bookmark. One variation on this design is that platform 114 may take the specified mode into account, but may override it based on a variety of factors (e.g., the unavailability of the specified mode, an overriding instruction generated by the application that provides the content, an overriding instruction from the user, etc.). Another variation on this design is to allow the user to specify the preferred mode as being whichever mode the user is using at the time the bookmark is accessed.
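By way of illustration only, a bookmark data structure carrying a preferred-mode field, together with logic that may override it, might be sketched as follows; the field and function names are hypothetical.

```python
# Illustrative sketch only: a bookmark record carrying a preferred mode,
# which the platform may override. Field and function names are hypothetical.

from dataclasses import dataclass

@dataclass
class Bookmark:
    name: str
    url: str
    preferred_mode: str = "visual"    # "voice", "visual", or "current"

def mode_for(bookmark, current_mode, available_modes, override=None):
    """Choose the mode used when re-directing to the bookmarked content."""
    if override in available_modes:            # overriding user or application instruction
        return override
    if bookmark.preferred_mode == "current":   # "whatever mode the user is in now"
        return current_mode
    if bookmark.preferred_mode in available_modes:
        return bookmark.preferred_mode
    return current_mode                        # specified mode unavailable; fall back
```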
According to another feature of the invention, bookmarks may be sent by a browser to bookmark repository 120 either continuously (i.e., as they are generated by the user) or in a “batch” mode (e.g., once per hour, once per day, once per week, etc.). Repository 120 thereby provides persistent storage of the bookmarks across browsing sessions. In a variation on the invention, repository 120 may be situated at wireless telephone 102.
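By way of illustration only, a browser-side component that forwards bookmarks either continuously or in batches might be sketched as follows; it assumes a repository object with an add(user_id, name, url) method such as the one sketched earlier, and all names are hypothetical.

```python
# Illustrative sketch only: a browser-side queue that forwards bookmarks to the
# repository either as they are created or in periodic batches.

import time

class BookmarkSync:
    def __init__(self, repository, batch=False, interval_seconds=3600):
        self.repository = repository      # assumed to expose add(user_id, name, url)
        self.batch = batch
        self.interval = interval_seconds
        self.pending = []
        self.last_flush = time.time()

    def bookmark(self, user_id, name, url):
        self.pending.append((user_id, name, url))
        # Continuous mode sends immediately; batch mode waits for the interval.
        if not self.batch or time.time() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        for user_id, name, url in self.pending:
            self.repository.add(user_id, name, url)
        self.pending.clear()
        self.last_flush = time.time()
```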
According to another feature of the invention, bookmarks may be “annotated.” That is, the user may provide his or her own description of the bookmark. As one example, the user may type the description when bookmarking content in visual mode, and this description may be “read” (e.g., by a voice synthesis system) when the user accesses the bookmark in voice mode.
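By way of illustration only, an annotated bookmark and its rendering in voice mode might be sketched as follows; the synthesize_speech callable stands in for whatever voice synthesis system is used and is purely a hypothetical placeholder.

```python
# Illustrative sketch only: a bookmark annotated with a user-typed description.
# The synthesize_speech callable is a hypothetical placeholder for a TTS engine.

bookmark = {
    "name": "weather",
    "url": "http://example.com/wx.wml",
    "annotation": "Five-day forecast for home",   # typed by the user in visual mode
}

def announce(bm, synthesize_speech):
    # In voice mode, the stored description (if any) is read back to the user.
    synthesize_speech(bm.get("annotation") or bm["name"])
```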
While the foregoing examples have been provided in the context of wireless telephony, it should be noted that the above-described system can also be deployed in other communications contexts, such as a wired telephone system. For example, wireless network switch 110 can be a wired-network telephone switch, such as a “5E,” and wireless telephone 102 may be embodied as a wired telephone that is enabled to receive both voice and data.
It is noted that the foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present invention. While the invention has been described with reference to various embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Further, although the invention has been described herein with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed herein; rather, the invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto, and changes may be made without departing from the scope and spirit of the invention in its aspects.
This application is a continuation of U.S. patent application Ser. No. 10/211,117, titled “System and Method for Providing Multi-Modal Bookmarks,” filed Aug. 2, 2002, which claims the benefit of U.S. Provisional Application No. 60/310,610, entitled “System and Method for Providing Multi-Modal Bookmarks,” filed Aug. 7, 2001, the entire contents of which are incorporated herein by reference.
Number | Date | Country
---|---|---
20120244846 A1 | Sep 2012 | US

Number | Date | Country
---|---|---
60310610 | Aug 2001 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 10211117 | Aug 2002 | US
Child | 13491465 | | US