Claims
- 1. A method of programming an application accessible by a user through one or more computer-based devices, the method comprising the steps of:
representing, by interaction-based programming components, interactions that the user is permitted to have with the one or more computer-based devices used to access the application; wherein the interaction-based programming components are independent of content/application logic and presentation requirements associated with the application, and further wherein the interaction-based programming components are transcoded on a component by component basis to generate one or more modality-specific renderings of the application on the one or more computer-based devices.
- 2. The method of claim 1, in a client/server arrangement wherein at least a portion of the application is to be downloaded from a server to at least one of the one or more computer-based devices, acting as a client, further comprising the step of including code in the application operative to provide a connection to the content/application logic resident at the server.
- 3. The method of claim 2, wherein the content/application logic connection code expresses at least one of one or more data models, attribute constraints and validation rules associated with the application.
- 4. The method of claim 1, wherein the one or more modality-specific renderings comprise a speech-based representation of portions of the application.
- 5. The method of claim 4, wherein the speech-based representation is based on VoiceXML.
- 6. The method of claim 1, wherein the one or more modality-specific renderings comprise a visual-based representation of portions of the application.
- 7. The method of claim 6, wherein the visual-based representation is based on at least one of HTML, CHTML and WML.
- 8. The method of claim 1, wherein the user interactions are declaratively represented by the interaction-based programming components.
- 9. The method of claim 1, wherein the user interactions are imperatively represented by the interaction-based programming components.
- 10. The method of claim 1, wherein the user interactions are declaratively and imperatively represented by the interaction-based programming components.
- 11. The method of claim 1, wherein the interaction-based programming components comprise basic elements associated with a dialog that may occur between the user and the one or more computer-based devices.
- 12. The method of claim 11, wherein the interaction-based programming components comprise complex elements, the complex elements being aggregations of two or more of the basic elements associated with the dialog that may occur between the user and the one or more computer-based devices.
- 13. The method of claim 1, wherein one or more of the interaction-based programming components represent conversational gestures.
- 14. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating informational messages to the user.
- 15. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating contextual help information.
- 16. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating actions to be taken upon successful completion of another gesture.
- 17. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating yes or no based questions.
- 18. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating dialogs where the user is expected to select from a set of choices.
- 19. The method of claim 18, wherein the select gesture comprises a subelement that represents the set of choices.
- 20. The method of claim 18, wherein the select gesture comprises a subelement that represents a test that the selection should pass.
- 21. The method of claim 20, wherein the select gesture comprises a subelement that represents an error message to be presented if the test fails.
- 22. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating rules for validating results of a given conversational gesture.
- 23. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating grammar processing rules.
- 24. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating dialogs that help the user navigate through portions of the application.
- 25. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating a request for at least one of user login and authentication information.
- 26. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating a request for constrained user input.
- 27. The method of claim 13, wherein the conversational gestures comprise a gesture for encapsulating a request for unconstrained user input.
- 28. The method of claim 13, wherein the conversational gestures comprise a gesture for controlling submission of information.
- 29. The method of claim 1, further comprising the step of providing a mechanism for defining logical input events and the association between the logical input events and physical input events that trigger the defined logical input events.
- 30. The method of claim 1, wherein the component by component transcoding is performed in accordance with XSL transformation rules.
- 31. The method of claim 1, wherein the component by component transcoding is performed in accordance with JavaBeans.
- 32. The method of claim 1, wherein the component by component transcoding is performed in accordance with JavaServer Pages.
- 33. The method of claim 1, wherein representation by the interaction-based programming components permits synchronization of the one or more modality-specific renderings of the application on the one or more computer-based devices.
- 34. The method of claim 1, wherein representation by the interaction-based programming components supports a natural language understanding environment.
- 35. The method of claim 1, further comprising the step of including code for permitting cosmetic alteration of a presentational feature associated with the one or more modality-specific renderings of the application on the one or more computer-based devices.
- 36. The method of claim 1, further comprising the step of including code for permitting changes to rules for transcoding on a component by component basis to generate the one or more modality-specific renderings of the application on the one or more computer-based devices.
- 37. The method of claim 1, wherein a definition of an underlying data model being populated is separated from a markup language defining the user interaction.
- 38. The method of claim 1, wherein a node_id attribute is attached to each component and the attribute is mapped over to various outputs.
- 39. The method of claim 1, wherein an author is provided with a pass-through mechanism to encapsulate modality-specific markup components.
- 40. The method of claim 1, wherein the components may be active in parallel.
- 41. The method of claim 1, wherein the representation and transcoding are extensible.
- 42. The method of claim 1, wherein a state of the application is encapsulated.
- 43. The method of claim 1, wherein the representation permits reference to dynamically generated data and supports callback mechanisms to the content/application logic.
- 44. Apparatus for use in accessing an application in association with one or more computer-based devices, the apparatus comprising:
one or more processors operative to: (i) obtain the application from an application server, the application being programmatically represented, by interaction-based programming components, as interactions that a user is permitted to have with the one or more computer-based devices, wherein the interaction-based programming components are independent of content/application logic and presentation requirements associated with the application; and (ii) transcode the interaction-based programming components on a component by component basis to generate one or more modality-specific renderings of the application on the one or more computer-based devices.
- 45. The apparatus of claim 44, wherein the one or more processors are distributed over the one or more computer-based devices.
- 46. The apparatus of claim 44, in a client/server arrangement wherein at least a portion of the application is to be downloaded from a server to at least one of the one or more computer-based devices, acting as a client, wherein the application includes code operative to provide a connection to the content/application logic resident at the server.
- 47. The apparatus of claim 46, wherein the content/application logic connection code expresses at least one of one or more data models, attribute constraints and validation rules associated with the application.
- 48. The apparatus of claim 44, wherein the one or more modality-specific renderings comprise a speech-based representation of portions of the application.
- 49. The apparatus of claim 48, wherein the speech-based representation is based on VoiceXML.
- 50. The apparatus of claim 44, wherein the one or more modality-specific renderings comprise a visual-based representation of portions of the application.
- 51. The apparatus of claim 50, wherein the visual-based representation is based on at least one of HTML, CHTML and WML.
- 52. The apparatus of claim 44, wherein the user interactions are declaratively represented by the interaction-based programming components.
- 53. The apparatus of claim 44, wherein the user interactions are imperatively represented by the interaction-based programming components.
- 54. The apparatus of claim 44, wherein the user interactions are declaratively and imperatively represented by the interaction-based programming components.
- 55. The apparatus of claim 44, wherein the interaction-based programming components comprise basic elements associated with a dialog that may occur between the user and the one or more computer-based devices.
- 56. The apparatus of claim 55, wherein the interaction-based programming components comprise complex elements, the complex elements being aggregations of two or more of the basic elements associated with the dialog that may occur between the user and the one or more computer-based devices.
- 57. The apparatus of claim 44, wherein one or more of the interaction-based programming components represent conversational gestures.
- 58. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating informational messages to the user.
- 59. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating contextual help information.
- 60. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating actions to be taken upon successful completion of another gesture.
- 61. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating yes or no based questions.
- 62. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating dialogs where the user is expected to select from a set of choices.
- 63. The apparatus of claim 62, wherein the select gesture comprises a subelement that represents the set of choices.
- 64. The apparatus of claim 62, wherein the select gesture comprises a subelement that represents a test that the selection should pass.
- 65. The apparatus of claim 64, wherein the select gesture comprises a subelement that represents an error message to be presented if the test fails.
- 66. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating rules for validating results of a given conversational gesture.
- 67. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating grammar processing rules.
- 68. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating dialogs that help the user navigate through portions of the application.
- 69. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating a request for at least one of user login and authentication information.
- 70. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating a request for constrained user input.
- 71. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for encapsulating a request for unconstrained user input.
- 72. The apparatus of claim 57, wherein the conversational gestures comprise a gesture for controlling submission of information.
- 73. The apparatus of claim 44, further comprising a mechanism for defining logical input events and the association between the logical input events and physical input events that trigger the defined logical input events.
- 74. The apparatus of claim 44, wherein the component by component transcoding is performed in accordance with XSL transformation rules.
- 75. The apparatus of claim 44, wherein the component by component transcoding is performed in accordance with JavaBeans.
- 76. The apparatus of claim 44, wherein the component by component transcoding is performed in accordance with JavaServer Pages.
- 77. The apparatus of claim 44, wherein representation by the interaction-based programming components permits synchronization of the one or more modality-specific renderings of the application on the one or more computer-based devices.
- 78. The apparatus of claim 44, wherein representation by the interaction-based programming components supports a natural language understanding environment.
- 79. The apparatus of claim 44, wherein the application further includes code permitting cosmetic alteration of a presentational feature associated with the one or more modality-specific renderings of the application on the one or more computer-based devices.
- 80. The apparatus of claim 44, wherein the application further includes code permitting changes to rules for transcoding on a component by component basis to generate the one or more modality-specific renderings of the application on the one or more computer-based devices.
- 81. The apparatus of claim 44, wherein a definition of an underlying data model being populated is separated from a markup language defining the user interaction.
- 82. The apparatus of claim 44, wherein a node_id attribute is attached to each component and the attribute is mapped over to various outputs.
- 83. The apparatus of claim 44, wherein an author is provided with a pass-through mechanism to encapsulate modality-specific markup components.
- 84. The apparatus of claim 44, wherein the components may be active in parallel.
- 85. The apparatus of claim 44, wherein the representation and transcoding are extensible.
- 86. The apparatus of claim 44, wherein a state of the application is encapsulated.
- 87. The apparatus of claim 44, wherein the representation permits reference to dynamically generated data and supports callback mechanisms to the content/application logic.
- 88. The apparatus of claim 44, wherein the one or more processors are distributed over the one or more computer-based devices and the application is synchronized across the one or more computer-based devices.
- 89. The apparatus of claim 44, wherein the representation of the application further permits cosmetization of the one or more modality-specific renderings via one or more modality-specific markup languages.
- 90. A browser apparatus for use in providing access to an application by a user through one or more computer-based devices, comprising a machine readable medium containing computer executable code which when executed permits the implementation of the steps of:
obtaining the application from an application server, the application being programmatically represented, by interaction-based programming components, as interactions that the user is permitted to have with the one or more computer-based devices, wherein the interaction-based programming components are independent of content/application logic and presentation requirements associated with the application; and transcoding the interaction-based programming components on a component by component basis to generate one or more modality-specific renderings of the application on the one or more computer-based devices.
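By way of illustration (the element names below are hypothetical stand-ins, not a vocabulary fixed by the claims; only the node_id attribute is named, in claims 38 and 82), an interaction-based document of the kind recited in claims 1 and 13 through 28 might read:

```xml
<!-- Hypothetical gesture-based interaction document. Element names are
     illustrative only; node_id is the per-component attribute of claims
     38 and 82. No presentation or content/application logic appears. -->
<dialog node_id="d1">
  <!-- informational-message gesture (claims 14/58) -->
  <message node_id="m1">Welcome to the transfer service.</message>

  <!-- selection-from-choices gesture (claims 18-21 and 62-65) -->
  <select node_id="s1" name="account">
    <caption>Which account?</caption>
    <choices>
      <choice value="checking">Checking</choice>
      <choice value="savings">Savings</choice>
    </choices>
    <predicate test="account != ''"/>
    <error>Please pick one of the listed accounts.</error>
  </select>

  <!-- constrained-input gesture (claims 26/70) with grammar rules (claims 23/67) -->
  <input node_id="i1" name="amount">
    <caption>How much would you like to transfer?</caption>
    <grammar type="currency"/>
  </input>

  <!-- submission-control gesture (claims 28/72); the URL is illustrative -->
  <submit node_id="u1" target="http://example.com/transfer" fields="account amount"/>
</dialog>
```

Because the document carries no presentation and no content/application logic, the same source can be transcoded component by component into VoiceXML, HTML, CHTML, or WML renderings (claims 4 through 7 and 48 through 51).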
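Claims 30 and 74 recite transcoding in accordance with XSL transformation rules. A minimal sketch of an HTML-targeting stylesheet for the hypothetical document above follows; one template per gesture is what makes the transcoding component by component, and carrying node_id into each output is one way to support the synchronization of claims 33 and 77. A parallel stylesheet targeting VoiceXML would map the same select gesture to, say, a field with a prompt and a grammar.

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- select gesture: rendered as an HTML menu; node_id becomes the
       HTML id so visual and speech renderings can stay in step -->
  <xsl:template match="select">
    <select name="{@name}" id="{@node_id}">
      <xsl:for-each select="choices/choice">
        <option value="{@value}"><xsl:value-of select="."/></option>
      </xsl:for-each>
    </select>
  </xsl:template>

  <!-- message gesture: rendered as a paragraph -->
  <xsl:template match="message">
    <p id="{@node_id}"><xsl:value-of select="."/></p>
  </xsl:template>
</xsl:stylesheet>
```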
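Claims 3, 37, 47, and 81 call for the data model being populated, with its attribute constraints and validation rules, to be defined separately from the markup defining the interaction. A hypothetical split, loosely in the style of an XForms-like model/interaction separation, might read (the bind element, its attributes, and the currency type are illustrative assumptions):

```xml
<!-- Hypothetical model/interaction split. The model carries attribute
     constraints and validation rules (claims 3/47); the dialog only
     references it (claims 37/81). -->
<application>
  <model id="transfer">
    <instance>
      <account/>
      <amount/>
    </instance>
    <!-- validation rule: amount must be a positive currency value -->
    <bind ref="amount" type="currency" required="true"
          constraint="amount &gt; 0"/>
  </model>

  <dialog model="transfer">
    <input name="amount">
      <caption>How much would you like to transfer?</caption>
    </input>
  </dialog>
</application>
```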
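Finally, claims 29 and 73 recite a mechanism for defining logical input events and associating them with the physical events that trigger them. A hypothetical event map might read (element names, modality labels, and trigger syntax are all illustrative assumptions):

```xml
<!-- Hypothetical event map: one logical event, several physical triggers -->
<events>
  <event name="help">
    <binding modality="gui"    trigger="key:F1"/>
    <binding modality="speech" trigger="phrase:help"/>
    <binding modality="wml"    trigger="softkey:2"/>
  </event>
</events>
```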
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to the U.S. provisional patent application identified by Ser. No. 60/158,777 filed on Oct. 12, 1999, the disclosure of which is incorporated by reference herein. The present application is related to (i) PCT international patent application identified as US99/23008 (attorney docket no. YO998-392) filed on Oct. 1, 1999; (ii) PCT international patent application identified as US99/22927 (attorney docket no. YO999-111) filed on Oct. 1, 1999; (iii) PCT international patent application identified as US99/22925 (attorney docket no. YO999-113) filed on Oct. 1, 1999, each of the above PCT international patent applications claiming priority to U.S. provisional patent application identified as U.S. Ser. No. 60/102,957 filed on Oct. 2, 1998 and U.S. provisional patent application identified as U.S. Ser. No. 60/117,595 filed on Jan. 27, 1999; and (iv) U.S. patent application identified as U.S. Ser. No. 09/507,526 (attorney docket no. YO999-178) filed on Feb. 18, 2000, which claims priority to U.S. provisional patent application identified as U.S. Ser. No. 60/128,081 filed on Apr. 7, 1999 and U.S. provisional patent application identified by Ser. No. 60/158,777 filed on Oct. 12, 1999. The disclosures of all of the above-referenced related applications are incorporated by reference herein.