Claims
- 1. A method of generating content indicating steps taken on a user interface to perform a task, the method comprising:
receiving a user input indicative of user manipulation of a control on the user interface; and recording, in response to the user input, an image of the control manipulated by the user on the user interface.
- 2. The method of claim 1 and further comprising:
displaying the recorded image of the control on an editor component configured to receive a textual description of the user manipulation of the control.
- 3. The method of claim 2 and further comprising:
prior to displaying the recorded image on an editor component, automatically generating text corresponding to user manipulation of the control.
- 4. The method of claim 3 and further comprising:
embedding the image in the textual description.
- 5. The method of claim 2 wherein recording an image comprises:
receiving position information indicative of a position of the control on the user interface.
- 6. The method of claim 5 wherein recording an image comprises:
receiving size information indicative of a size of the control on the user interface.
- 7. The method of claim 6 wherein recording comprises:
recording context image information indicative of a context image showing a context of the control on the user interface based on the size information and the position information.
- 8. The method of claim 7 wherein recording context image information comprises:
calculating the context image information to record based on a heuristic.
- 9. The method of claim 7 wherein recording context information comprises recording an image about at least a portion of a periphery of the image of the control.
- 10. The method of claim 7 wherein recording an image comprises:
receiving parent window information indicative of a parent window of the control on the user interface.
- 11. The method of claim 10 wherein recording an image comprises:
recording an image of the parent window of the control.
- 12. The method of claim 11 wherein displaying the recorded image comprises:
displaying the image of the control and the context image and the image of the parent window on the editor component.
- 13. The method of claim 12 wherein displaying comprises:
displaying the image of the control and the context image and the image of the parent window on a first display portion of the editor component; and displaying selectable indicators of the steps taken to perform the task on a second display portion of the editor component.
- 14. The method of claim 13 wherein the editor component is configured such that when one of the selectable indicators is selected, images corresponding to the step associated with the selected indicator are displayed on the first display portion of the editor component.
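Claims 5 through 8 describe deriving a recorded context image from the control's position and size using a heuristic. As a rough illustrative sketch only (none of these names or the specific padding heuristic come from the patent), the context region might be computed by padding the control's bounding box and clamping it to the parent window:

```python
# Illustrative sketch: compute a "context" capture rectangle around a UI
# control from its position and size, in the spirit of claims 5-8.
# The function name, parameters, and padding heuristic are hypothetical.

def context_rect(control, parent, pad_ratio=0.5, min_pad=20):
    """Pad the control's bounding box and clamp it to the parent window.

    `control` and `parent` are (left, top, width, height) tuples in pixels.
    """
    cl, ct, cw, ch = control
    pl, pt, pw, ph = parent
    # Heuristic: pad by a fraction of the control's size, at least min_pad px.
    pad_x = max(int(cw * pad_ratio), min_pad)
    pad_y = max(int(ch * pad_ratio), min_pad)
    left = max(cl - pad_x, pl)
    top = max(ct - pad_y, pt)
    right = min(cl + cw + pad_x, pl + pw)
    bottom = min(ct + ch + pad_y, pt + ph)
    return (left, top, right - left, bottom - top)
```

A real implementation would pass the resulting rectangle to a screen-capture routine; the clamping step keeps the context image from spilling outside the parent window identified in claims 10 and 11.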
- 15. A content generation system for generating content describing steps taken by a user to perform a task on a user interface, comprising:
a recording system configured to receive an indication that the user has taken a step and to record an image of at least a portion of the user interface identifying the step.
- 16. The content generation system of claim 15 and further comprising:
an editor component configured to display recorded images and receive associated text.
- 17. The content generation system of claim 16 wherein the editor component is configured to generate final content with the images embedded in the associated text.
- 18. The content generation system of claim 16 wherein the recording system is configured to record step identifying information identifying the recorded step.
- 19. The content generation system of claim 18 and further comprising:
an automatic text generation system configured to receive the step identifying information and automatically generate text describing the step based on the identifying information.
- 20. The content generation system of claim 16 wherein the user takes the step by manipulating a control on the user interface and wherein the recording system comprises:
a component configured to identify a position on the user interface, and a size, of the control manipulated by the user.
- 21. The content generation system of claim 20 wherein the recording system is configured to record an image of the control based on the position and size of the control on the user interface.
- 22. The content generation system of claim 21 wherein the recording system is configured to identify a contextual image, larger than the image of the control, and to record the contextual image.
- 23. The content generation system of claim 22 wherein the recording system is configured to record an image of a parent window on the user interface that is parent to the control.
- 24. The content generation system of claim 23 wherein the editor component is configured to display the contextual image and the image of the parent window on a first portion of a display screen.
- 25. The content generation system of claim 24 wherein the editor component is configured to display the text associated with the steps taken by the user on a second portion of the display screen, the text including a plurality of indicators, one for each step.
- 26. The content generation system of claim 25 wherein the editor component is configured to receive a user selection of one of the indicators in the second portion of the display screen and display the contextual image and the image of the parent window associated with the selected indicator on the first portion of the display screen.
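Claims 16 through 19 describe an editor component that pairs each recorded step with text (supplied by the user or produced by the automatic text generation system) and emits final content with the images embedded in that text. A minimal sketch of that pairing, with all names and the output format being hypothetical stand-ins rather than anything specified in the patent:

```python
# Illustrative sketch: an "editor component" that pairs each recorded step
# image with associated text and emits final content with images embedded
# (claims 16-19). All names and formats here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Step:
    control_name: str   # step-identifying information from the recording system
    action: str         # e.g. "click", "type"
    image_path: str     # path to the recorded image of the control

def generate_text(step):
    # Trivial stand-in for the automatic text generation system of claim 19.
    return f"{step.action.capitalize()} '{step.control_name}'."

@dataclass
class Editor:
    steps: list = field(default_factory=list)

    def add_step(self, step, text=None):
        # Accept user-supplied text, falling back to auto-generated text.
        self.steps.append((step, text or generate_text(step)))

    def final_content(self):
        # Embed each step's image directly in its textual description.
        return "\n".join(
            f"{i}. {text} [image: {step.image_path}]"
            for i, (step, text) in enumerate(self.steps, 1)
        )
```

The design point the claims turn on is that text and image travel together per step, so the final document interleaves them rather than collecting screenshots in an appendix.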
- 27. A computer readable medium storing instructions which, when read by a computer, cause the computer to perform steps of:
detecting a user manipulation of an element on a user interface; and recording, in response to the user manipulation, an image from the user interface indicative of the element.
- 28. The computer readable medium of claim 27 wherein detecting comprises:
identifying a size and position of the element.
- 29. The computer readable medium of claim 28 wherein recording comprises:
recording an image of the element based on the size and position of the element.
- 30. The computer readable medium of claim 29 wherein recording an image of the element comprises:
recording an image of context around at least a portion of the element on the user interface.
- 31. The computer readable medium of claim 30 wherein detecting comprises:
detecting a parent window of the element; and recording an image of the parent window.
- 32. The computer readable medium of claim 31 and further comprising:
displaying the image of the element and the context and the parent window on an editor configured to receive associated text.
- 33. The computer readable medium of claim 27 and further comprising:
automatically generating text associated with the recorded image.
- 34. The computer readable medium of claim 27 wherein the element comprises a control element.
- 35. The computer readable medium of claim 34 wherein the element comprises a text box.
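Claims 27 through 32 describe the recording pipeline from the medium's perspective: detect a manipulation, identify the element's size and position, then record images of the element, its surrounding context, and its parent window. A sketch of that event-driven flow, assuming a simplified event dictionary (a real recorder would hook OS accessibility or UI-automation events, and every name here is hypothetical):

```python
# Illustrative sketch: an event-driven recorder per claims 27-32. On each
# simulated UI event it records the element's bounds, a padded context
# region, and the parent window bounds. Names are hypothetical; a real
# recorder would subscribe to OS accessibility / UI-automation events.

def pad_and_clamp(rect, bounds, pad):
    """Grow `rect` by `pad` pixels on each side, clamped inside `bounds`."""
    l, t, w, h = rect
    bl, bt, bw, bh = bounds
    left, top = max(l - pad, bl), max(t - pad, bt)
    right = min(l + w + pad, bl + bw)
    bottom = min(t + h + pad, bt + bh)
    return (left, top, right - left, bottom - top)

class Recorder:
    def __init__(self):
        self.recorded = []

    def on_user_event(self, event):
        # "Detecting" (claim 28): pull the element's size/position and its
        # parent window from the event.
        elem = event["element_rect"]     # (left, top, width, height)
        parent = event["parent_rect"]
        # "Recording" (claims 29-31): element image, context image, parent.
        self.recorded.append({
            "element": elem,
            "context": pad_and_clamp(elem, parent, pad=20),
            "parent": parent,
        })
```

Each entry in `recorded` carries the three nested regions that claim 32 later hands to the editor for display alongside associated text.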
RELATED APPLICATIONS
[0001] The present invention is a continuation-in-part of co-pending related U.S. patent application Ser. No. 10/337,745, filed Jan. 7, 2003, entitled ACTIVE CONTENT WIZARD: EXECUTION OF TASKS AND STRUCTURED CONTENT. Reference is also made to U.S. patent application Ser. No. 10/------, filed Jul. 8, 2004, entitled AUTOMATIC TEXT GENERATION, and U.S. patent application Ser. No. 10/------, filed Jul. 8, 2004, entitled IMPORTATION OF AUTOMATICALLY GENERATED CONTENT, both assigned to the same assignee as the present invention.
Continuation in Parts (1)
|        | Number   | Date     | Country |
|--------|----------|----------|---------|
| Parent | 10337745 | Jan 2003 | US      |
| Child  | 10887414 | Jul 2004 | US      |