Fonts with feelings

Information

  • Patent Application
    20070226641
  • Publication Number
    20070226641
  • Date Filed
    March 27, 2006
  • Date Published
    September 27, 2007
Abstract
Various technologies and techniques are disclosed that improve the instructional nature of fonts and/or the ability to create instructional fonts. Font characters are modified based on user interaction to enhance the user's understanding and/or fluency of the word. The font characters can have sound, motion, and altered appearance. When altering the appearance of the characters, the system operates on a set of control points associated with characters, changes the position of the characters, and changes the influence of the portion of characters on a set of respective spline curves. A designer or other user can customize the fonts and user experience by creating an episode package that specifies words to include in the user interface, and details about actions to take when certain events fire. The episode package can include media effects to play when a particular event associated with the media effect occurs.
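The character alteration summarized above — operating on control points and changing their influence on spline curves — can be illustrated with a minimal sketch. Here a glyph outline is treated as a list of (x, y) control points, and a sinusoid transformation of the kind shown in FIG. 11 displaces each point before the curves are re-rendered. The function, parameters, and outline data are hypothetical illustrations, not the patented implementation.

```python
import math

def sinusoid_transform(control_points, amplitude=2.0, wavelength=40.0, phase=0.0):
    """Offset each glyph control point vertically along a sine wave,
    making the character outline appear to ripple when redrawn."""
    return [
        (x, y + amplitude * math.sin(2 * math.pi * x / wavelength + phase))
        for (x, y) in control_points
    ]

# A few control points from a hypothetical glyph outline.
outline = [(0, 0), (10, 20), (20, 0), (30, 20)]
wavy = sinusoid_transform(outline, amplitude=2.0, wavelength=40.0)
```

Animating the `phase` parameter over successive frames would make the displacement travel along the baseline, giving the "motion" effect the abstract describes.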
Description

BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagrammatic view of the properties available in an application operating on a computer system of one implementation.



FIG. 2 is a diagrammatic view of a computer system of one implementation.



FIG. 3 is a high-level process flow diagram for one implementation of the system of FIG. 2.



FIGS. 4-8 are diagrams for one implementation of the system of FIG. 2 illustrating program logic that executes at the appropriate times to implement one or more techniques for the system.



FIG. 9 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in tracing characters.



FIG. 10 is a diagrammatic view for one implementation of the system of FIG. 2 illustrating the components that allow customization of the user experience.



FIG. 11 is a diagram for one implementation of the system of FIG. 2 illustrating an exemplary sinusoid transformation.



FIG. 12 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in customizing the user experience using content in external files.



FIG. 13 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in resizing the words based on user interaction.



FIGS. 14-17 are simulated screens for one implementation of the system of FIG. 2 illustrating variations of the sizing and placement of words based on user interaction.



FIG. 18 is a diagrammatic view for one implementation of the system of FIG. 2 illustrating content being authored by multiple authors and used in customizing the user experience.



FIG. 19 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in customizing and processing events.



FIG. 20 is a diagram illustrating a sample text file for one implementation of the system of FIG. 2.



FIG. 21 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in processing programmatic events.



FIG. 22 is a logical diagram for one implementation of the system of FIG. 2 and the stages of FIG. 21 illustrating an exemplary programmatic event being processed.



FIG. 23 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in processing hover events.



FIG. 24 is a logical diagram for one implementation of the system of FIG. 2 and the stages of FIG. 23 illustrating an exemplary hover event being processed.



FIG. 25 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in processing a speech event.



FIG. 26 is a logical diagram for one implementation of the system of FIG. 2 and the stages of FIG. 25 illustrating an exemplary speech event being processed.



FIG. 27 is a logical diagram for one implementation of the system of FIG. 2 illustrating an exemplary event being processed for a comic.



FIG. 28 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in helping a user understand a word.



FIG. 29 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in helping a user reinforce an understanding they already have of a word.



FIG. 30 is a process flow diagram for one implementation of the system of FIG. 2 illustrating the stages involved in setting some exemplary types of tags in the settings files for customizing the actions.
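The episode package described in the abstract, whose authoring and processing FIGS. 10, 12, and 18-20 illustrate, is an external settings file pairing words with actions to take when events fire. As a purely illustrative sketch (the application publishes no schema, so every element and attribute name here is hypothetical):

```xml
<episode>
  <word text="elephant">
    <onHover action="playSound" media="elephant.wav" />
    <onSelect action="showPictogram" media="elephant.png" />
    <onSpeech action="animate" effect="sinusoid" />
  </word>
</episode>
```

Because such a file lives outside the application, a designer or other author could customize the words, media effects, and event handling without recompiling the program, as the abstract suggests.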


Claims
  • 1. A computer-readable medium having computer-executable instructions for causing a computer to perform steps comprising: providing an application having a plurality of characters in at least one font; in response to an input from an input device operated by a user, giving at least a portion of the characters an effect that is selected from the group consisting of motion, sound, and altered appearance; and wherein the input is selected from the group consisting of a hover over a particular section of the portion of characters, a selection of the particular section of the portion of characters, and a spoken command associated with the portion of characters.
  • 2. The computer-readable medium of claim 1, wherein the altered appearance makes the portion of characters appear to be in motion.
  • 3. The computer-readable medium of claim 1, wherein the effect is designed to aid a word understanding of the user.
  • 4. The computer-readable medium of claim 1, wherein the sound is a spoken pronunciation of a word associated with the portion of characters.
  • 5. The computer-readable medium of claim 1, wherein the sound is a phoneme pronunciation of the word.
  • 6. The computer-readable medium of claim 1, wherein the sound is a normal pronunciation of the word.
  • 7. The computer-readable medium of claim 1, wherein the sound is a pronunciation of the word in a native language of the user.
  • 8. The computer-readable medium of claim 1, wherein the sound is a pronunciation of the word in a foreign language of the user.
  • 9. The computer-readable medium of claim 1, wherein the effect is designed to aid a language fluency of the user.
  • 10. A computer-readable medium having computer-executable instructions for causing a computer to perform steps comprising: providing an application having at least one word in at least one font; upon an interaction with the word by a user, providing an output selected from the group consisting of: a phoneme pronunciation of the word, a normal pronunciation of the word, a pictogram of the word, a multimedia representation of the word, and a designed effect for the word; and wherein a set of data related to the interaction with the word by the user is stored in a data store.
  • 11. The computer-readable medium of claim 10, wherein the interaction is a hover over the word with an input device.
  • 12. The computer-readable medium of claim 10, wherein the interaction is a selection of the word with an input device.
  • 13. The computer-readable medium of claim 10, wherein the designed effect is custom designed by a designer.
  • 14. The computer-readable medium of claim 10, further comprising the steps of: repeating the providing the output stages for a plurality of interactions.
  • 15. The computer-readable medium of claim 14, further comprising the steps of: receiving an indication that the user does not fully understand the word; during a first interaction of the plurality of interactions, providing the phoneme pronunciation of the word; during a second interaction of the plurality of interactions, providing the normal pronunciation of the word; during a third interaction of the plurality of interactions, providing the pictogram of the word; during a fourth interaction of the plurality of interactions, providing the multimedia representation of the word; and during a fifth interaction of the plurality of interactions, providing the designed effect for the word.
  • 16. The computer-readable medium of claim 16, further comprising the steps of: recognizing that the user understands the word; and during each interaction of the plurality of interactions, randomly providing the output selected from the group consisting of: the phoneme pronunciation of the word, the normal pronunciation of the word, the pictogram of the word, the multimedia representation of the word, and the designed effect for the word.
  • 17. The computer-readable medium of claim 16, further comprising the steps of: ensuring that a same output is not repeated two interactions in a row.
  • 18. A method for aiding a word understanding comprising: providing an application having a plurality of characters in at least one font; in response to an input from an input device operated by a user, altering at least a portion of the characters, the altering step comprising: operating on a set of control points associated with the portion of characters; changing a position of the portion of characters; and changing an influence of the portion of characters on a set of respective spline curves; and wherein the input is selected from the group consisting of a hover over a particular section of the portion of characters, a selection of the particular section of the portion of characters, and a spoken command associated with the portion of characters.
  • 19. The method of claim 18, wherein the altering step alters the portion of characters to a format the user can trace.
  • 20. A computer-readable medium having computer-executable instructions for causing a computer to perform the steps recited in claim 1.
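The output-selection logic recited in claims 15-17 — a fixed sequence of outputs while the user is still learning a word, then random selection that never repeats the same output two interactions in a row — can be sketched as follows. The function names and output labels are illustrative only.

```python
import random

OUTPUTS = [
    "phoneme pronunciation",
    "normal pronunciation",
    "pictogram",
    "multimedia representation",
    "designed effect",
]

def progressive_output(interaction_count):
    """Claim 15: while the user does not fully understand the word,
    walk through the outputs in a fixed order, one per interaction."""
    return OUTPUTS[min(interaction_count, len(OUTPUTS) - 1)]

def random_output(previous=None):
    """Claims 16-17: once the user understands the word, pick an output
    at random, excluding whatever was provided last time."""
    choices = [o for o in OUTPUTS if o != previous]
    return random.choice(choices)

# First five interactions with an unfamiliar word follow the fixed order.
sequence = [progressive_output(i) for i in range(5)]

# For a familiar word, consecutive random picks never repeat.
prev = None
for _ in range(100):
    pick = random_output(prev)
    assert pick != prev
    prev = pick
```

Excluding only the immediately previous output (rather than all prior ones) is what keeps the selection "random" in the sense of claim 16 while still satisfying the non-repetition constraint of claim 17.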