Some embodiments relate to printing personalized storybooks, and more specifically to printing storybooks that are created based on user interactions within a virtual environment.
People interact with virtual environments such as video games or massively-multiplayer online games. A participant's client device, or computer, typically accesses a computer-simulated world, and presents perceptual stimuli to the user. Users can operate their client devices or computer input/output (I/O) devices to manipulate elements of the game world. For example, a user may identify with a character and move that character within the game to interact with elements in the environment. These elements may include non-player characters and other users' characters. When a user or participant interacts with an element of the game, the rendering of the game is updated to represent its new internal state.
Some details of the user interactions may be encapsulated in logs that are later used in debugging or other forms of reactive software development. These logs are not accessible to the user of the game and their format is not intended to be readable by the user.
In some cases, users find it useful to capture still or moving images of a video game. For example, a user may find it useful to have a still image of an interaction for reflection, review, or sharing with friends, family, or others. Current mechanisms and techniques for grabbing images rely on screen grab or screen capture software on the user's computing device. Such screen grab mechanisms capture the contents of a device's screen, a window, or the user's desktop into a picture (or video) file that can later be opened using image preview applications. Hence, present capture mechanisms for use in video games require initiation by the user of the client device and rely on software installed on the computing device.
Implementations disclosed herein address the above deficiencies and other problems associated with providing video game users narrative representations of their interactions with the game. Images are captured by the video game itself, associated with narrative text, and presented as a sequence. The user can adapt the narrative text and/or images to create a customized story that represents the game play.
In accordance with some implementations, a method is provided that creates storybooks corresponding to user interactions with a virtual game environment. The method is performed at a computing device having one or more processors and memory storing one or more programs configured for execution by the one or more processors. The computing device receives user input to control actions of a virtual character within the virtual game environment and records, without user input, a temporal sequence of events from the game environment. Each event represents an interaction of the virtual character with the game environment, and each event includes a respective image and respective text describing the respective event. Subsequent to the recording, the computing device presents to the user a sequence of simulated pages corresponding to the sequence of events. Each simulated page includes at least a portion of the respective recorded image for a respective event and includes at least a portion of the respective text describing the event. For at least a subset of the simulated pages, the computing device receives user input to modify at least a portion of the respective text. The computing device generates a file that includes the sequence of simulated pages as modified by the user.
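By way of illustration, the event and simulated-page structures described above may be sketched as follows. This is a minimal sketch: the class and field names (`Event`, `SimulatedPage`, `image_path`, and so on) are hypothetical and are not taken from the disclosure itself.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for a recorded event: one captured image plus
# narrative text describing the interaction, with optional labels.
@dataclass
class Event:
    image_path: str                      # image captured from the game scene
    text: str                            # narrative text describing the event
    labels: List[str] = field(default_factory=list)

# Hypothetical model of one simulated page presented to the user.
@dataclass
class SimulatedPage:
    image_path: str
    text: str

def build_pages(events: List[Event]) -> List[SimulatedPage]:
    """Map each recorded event to a simulated page, preserving temporal order."""
    return [SimulatedPage(e.image_path, e.text) for e in events]

def edit_page_text(pages: List[SimulatedPage], index: int, new_text: str) -> None:
    """Apply a user edit to the text of one simulated page."""
    pages[index].text = new_text
```

The generated file would then serialize the edited page sequence in a chosen output format.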
In some implementations, the method includes facilitating a printing of the file to create a tangible book, which includes the simulated pages as modified by the user.
In some implementations, the file has a file type that is one of JPEG, TIFF, BMP, PNG, PDF, EPUB, or MOBI.
In some implementations, the method includes transmitting the file to a remote book printing provider with instructions to ship a bound book corresponding to the file to a specified geographic address.
Generally, the method includes displaying the virtual game environment and the virtual character on a display device associated with the computing device. In some implementations, displaying the virtual game environment includes displaying respective narrative text corresponding to a respective displayed virtual scene. In some implementations, the method includes receiving user activation of a user interface control to include the respective narrative text with a first event. The activation occurs during user interaction with the virtual game environment. In some instances, at least a portion of the respective narrative text is included in a simulated page corresponding to the first event.
In some implementations, the virtual character is a sentient being within the virtual game environment. In some implementations, the user can choose a virtual character to represent herself/himself. In some implementations, the user can specify various characteristics of the selected virtual character, such as gender, age, size, clothing or skin tone, and so on.
In some implementations, a first event includes two or more images. In some of these implementations, the first event corresponds to a first simulated page, and the first simulated page includes the two or more respective images. In other implementations, the first event corresponds to a plurality of simulated pages, and each simulated page in the plurality of simulated pages includes a respective one of the two or more images.
In some implementations, the respective text for a second simulated page includes a plurality of text options, and the user selects one of the plurality of text options. In some implementations, alternative text options are provided for specific words or phrases rather than the entire respective text as a whole.
In some implementations where an event includes multiple images, the event corresponds to a first simulated page, and the multiple images are presented to the user for user selection. In some implementations, while presenting the first simulated page to the user, the plurality of images are presented as alternative options for the first simulated page and the user selects one (or more) of the plurality of images. The selected image or images are included in the generated file.
In some implementations, the method includes receiving a user-provided name for the virtual character, and the user-provided name is included in the respective text on one or more of the simulated pages. In some implementations, the user-provided name identifies the virtual character. In some implementations, the user may provide other attributes for the virtual character, such as gender, age, or other physical attributes.
In some implementations, each event includes a respective caption, distinct from the respective text describing the respective event, and the respective caption is displayed accompanying the respective image in a respective simulated page. In some implementations, the caption is editable; in other implementations, it is immutable.
In some implementations, a third event includes one or more labels that identify locations within the virtual game environment when the third event is recorded, and the one or more labels are displayed on a third simulated page to identify the locations.
In some implementations, a fourth event includes data that identifies a state of the virtual game environment when the fourth event is recorded, and the data is displayed on a fourth simulated page to convey the state of the virtual game environment.
In some implementations, a fifth event includes one or more labels that identify other virtual characters or virtual objects within the virtual game environment when the fifth event is recorded, and the one or more labels are displayed on a fifth simulated page to identify the other virtual characters or virtual objects.
In some implementations, a sixth event includes a conversation between the virtual character and one or more other virtual characters within the virtual game environment at the time the sixth event is recorded, and the conversation is displayed in a textual format on a sixth simulated page. In some implementations, the virtual character can have a conversation with a virtual assistant or virtual object as well.
In some implementations, a seventh event includes one or more labels that identify virtual objects collected by the virtual character when the seventh event is recorded, and the one or more labels are displayed on a seventh simulated page to identify the objects.
In some implementations, an eighth event includes one or more labels that identify achievements of the virtual character when the eighth event is recorded, and the one or more labels are displayed on an eighth simulated page to identify the achievements.
In some implementations, a first event includes one or more labels that identify current game data at the time the first event is recorded. The current game data includes one or more of: named locations within the virtual game environment; a state of the virtual game environment; other virtual characters or virtual objects within the virtual game environment; virtual objects collected by the virtual character; and achievements of the virtual character. The one or more labels are displayed on a first simulated page corresponding to the first event.
In some implementations, the sequence of simulated pages includes one or more simulated pages that include graphics that are one or more of: a map of at least a portion of the virtual game environment, including a path on the map showing movement of the virtual character within the virtual game environment; a photograph of the user taken by a photo sensor associated with the computing device, where the photograph is taken during the user's interaction with the virtual game environment; an image of virtual objects collected by the virtual character; and an image depicting a certificate of achievement of the virtual character in the virtual game environment.
In some implementations, one or more simulated pages include multimedia attachments that are one or more of: a video clip from the virtual game environment; an audio clip from the virtual game environment; a video clip of the user interacting with the virtual game environment; and an audio clip of the user interacting with the virtual game environment.
In some implementations, at least a subset of the events are automatically recorded, without human intervention, when the virtual character reaches a milestone in the virtual game environment.
In some implementations, a system is provided that creates storybooks corresponding to user interactions with a virtual game environment. The system includes one or more processors and memory storing one or more programs configured for execution by the one or more processors. The system receives user input to control actions of a virtual character within the virtual game environment and records, without user input, a temporal sequence of events from the game environment. Each event represents an interaction of the virtual character with the game environment, and each event includes a respective image and respective text describing the respective event. Subsequent to the recording, the system presents to the user a sequence of simulated pages corresponding to the sequence of events. Each simulated page includes at least a portion of the respective recorded image for a respective event and includes at least a portion of the respective text describing the event. For at least a subset of the simulated pages, the system receives user input to modify at least a portion of the respective text. The system generates a file that includes the sequence of simulated pages as modified by the user.
In some implementations, a system is provided that performs any of the method steps listed above.
In some implementations, a non-transitory computer readable storage medium is provided that stores programs for creating storybooks corresponding to user interactions with a virtual game environment. The computer readable storage medium stores one or more programs configured for execution by a computing device. The programs are configured to receive user input to control actions of a virtual character within the virtual game environment and to record, without user input, a temporal sequence of events from the game environment. Each event represents an interaction of the virtual character with the game environment, and each event includes a respective image and respective text describing the respective event. Subsequent to the recording, the programs are configured to present to the user a sequence of simulated pages corresponding to the sequence of events. Each simulated page includes at least a portion of the respective recorded image for a respective event and includes at least a portion of the respective text describing the event. The programs are configured to receive user input to modify at least a portion of the respective text. The programs are configured to generate a file that includes the sequence of simulated pages as modified by the user.
In some implementations, a non-transitory computer readable storage medium is provided that stores one or more programs. The one or more programs include instructions for performing any of the method steps above.
In some implementations, a video game records the interactions between a user and elements of the game in the form of events. The events encapsulate the metadata, descriptions, editable captions, and images of the user (the user's character) interacting with the game. The sequence of recorded events is then presented to the user, for example, following the completion of one or more sessions in the game. Each event caption takes the form of a brief narrative describing the user's interaction, or the interaction of the game character. In some implementations, the user is then provided with options to change words in the captions using a set of provided alternatives. In some instances, the alternatives include synonyms or antonyms provided by the developer of the game. For example, in a caption such as “The robot ran down the corridor quickly,” the alternatives offered to the user for “ran down,” “corridor,” and “quickly” may respectively include: “trundled along,” “passed through,” “fell down,” and “waddled along;” “passageway,” “hallway,” “tunnel,” and “shaft;” “swiftly,” “slowly,” “rapidly,” and “clumsily.”
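The per-phrase caption alternatives described in the example above may be sketched as a simple lookup from each editable phrase to its developer-provided options. The function name `apply_choice` and the dictionary shape are illustrative assumptions, not part of the disclosure.

```python
# Developer-provided alternatives for each editable phrase in a caption.
caption = "The robot ran down the corridor quickly."
alternatives = {
    "ran down": ["trundled along", "passed through", "fell down", "waddled along"],
    "corridor": ["passageway", "hallway", "tunnel", "shaft"],
    "quickly": ["swiftly", "slowly", "rapidly", "clumsily"],
}

def apply_choice(caption: str, phrase: str, choice: str) -> str:
    """Replace an editable phrase with the alternative the user selected."""
    if choice not in alternatives.get(phrase, []):
        raise ValueError(f"{choice!r} is not an offered alternative for {phrase!r}")
    return caption.replace(phrase, choice)
```

For example, choosing "tunnel" for "corridor" yields "The robot ran down the tunnel quickly."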
The user may indicate that the recording of events should be transformed into a commonly-used digital printable format (e.g., ePub) so that a hard copy (e.g., a book) may be created for printing or sharing with others via digital and electronic communication mechanisms. The printable format includes a sequence of renderings of the events in the form of narrative text, images, and other details derived from the events. The rendering of the hard copy may be a conventional hard-copy book including such notions as front and back covers, bindings and so on. The user is given the option to specify other attributes for the book, such as title, author, and date, which are rendered on appropriate elements of the digital book (e.g., the front cover).
The recording methods and systems described herein provide numerous benefits and advantages over prior recording mechanisms. For example, in a learning context, users are able to reflect on their interactions within the game and construct narratives that best suit their experience. Furthermore, because the narrative representation is situated within, and accessed from, the game, it provides a deeper connection to the user's learning experience. Prior techniques supported recording images from only one scene in a video game, without the benefit of a narrative complement to those images. In other words, the user's learning experience within the game is enriched by the storybook mechanism.
In some implementations, the user is able to view a sequence of captioned images that represent interaction within the game. The user can personalize captions associated with these images, thereby forming a personalized narrative representation of the interactions. This representation may be separated into sections, such as chapters, and can be accessed and manipulated from within the game. The narrative representation is available in a printable form, and can be shared with others using commonly available electronic digital communications tools.
Thus methods and systems are provided that dynamically create personalized storybooks based on user interactions within a virtual environment.
For a better understanding of the aforementioned implementations of the invention as well as additional implementations thereof, reference should be made to the Description of Implementations below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
Reference will now be made to implementations, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details.
After the digital book is created, it may be sent (114) to a printer or bookbinder 106, which prints/binds (116) a tangible book 108 that corresponds to the user's interaction with the virtual game environment 110. The tangible book 108 may be shipped (118) to the user 102 or to any other person, such as a friend or relative.
In some instances, the digital book is distributed electronically instead of, or in addition to, creating a tangible book 108. In some implementations, the digital book may be read electronically, either using an eBook reader or other software application. In some implementations, a book file 226 is transmitted (114′) to a web server 130, which can store and distribute the digital book to the original user 102 or to other people 132, such as friends and relatives. In some implementations, the digital book itself is distributed (116′) (e.g., as an ePub or PDF), which can then be viewed on the recipient's computing device. In some implementations, the digital book is stored only at the web server, and users access the digital book over a network. For example, the user 102 may send a link to other people 132, and clicking on the links directs the recipient's browser to the web server 130 where the digital book is stored.
In some implementations, the memory 214 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices. In some implementations, the memory 214 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. In some implementations, the memory 214 includes one or more storage devices remotely located from the CPU(s) 202. The memory 214, or alternately the non-volatile memory device(s) within the memory 214, comprises a non-transitory computer readable storage medium. In some implementations, the memory 214, or the computer readable storage medium of the memory 214, stores the following programs, modules, and data structures, or a subset thereof:
Each of the above identified executable modules, applications, or sets of procedures may be stored in one or more of the previously mentioned memory devices and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, the memory 214 may store a subset of the modules and data structures identified above. Furthermore, the memory 214 may store additional modules or data structures not described above.
In some implementations, the data or executable programs illustrated in
In some implementations, events are triggered automatically based on reaching or attaining certain milestones within the game, such as reaching the top of a mountain or volcano, collecting an object, reaching an achievement level, opening a door, and so on. Some implementations have a predefined set of milestones.
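The milestone-triggered recording just described can be sketched as a membership check against the predefined milestone set, recording each milestone the first time it is reached. The names `MILESTONES` and `on_game_event` are illustrative assumptions.

```python
# Hypothetical predefined milestone set checked as the game state changes.
MILESTONES = {"mountain_top", "object_collected", "door_opened"}
recorded_milestones = set()

def on_game_event(name, capture):
    """Record an event the first time the character reaches a known milestone.

    `capture` is a callback that performs the actual event recording.
    """
    if name in MILESTONES and name not in recorded_milestones:
        recorded_milestones.add(name)
        capture(name)
```

Subsequent occurrences of the same milestone are ignored, so each milestone yields at most one automatic event.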
In some implementations, at least some of the events are triggered based on a timer. For example, if a certain amount of time (e.g., five minutes) has elapsed since the last event, another event is recorded automatically. In some implementations, multiple images are saved for at least some of the events, and the user is later able to decide which image(s) to use for the book that is created. In some implementations, images are recorded for each event at scheduled intervals, such as every 15 seconds. In some implementations, the user can trigger the capture of additional images, which may be stored together in a single event with other images that are captured automatically.
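The timer-based trigger can be sketched as a check run once per frame of the game loop: if the configured interval has elapsed since the last recorded event, a new event is captured. `EVENT_INTERVAL`, `capture_scene`, and `maybe_record` are hypothetical names for this sketch.

```python
import time

EVENT_INTERVAL = 300.0   # five minutes between automatic events
last_event_time = 0.0    # monotonic timestamp of the last recorded event
events = []

def capture_scene():
    """Stand-in for grabbing the current frame image and narrative text."""
    return {"image": "frame.png", "text": "..."}

def maybe_record(now=None):
    """Record a new event if EVENT_INTERVAL has elapsed since the last one."""
    global last_event_time
    now = time.monotonic() if now is None else now
    if now - last_event_time >= EVENT_INTERVAL:
        events.append(capture_scene())
        last_event_time = now
```

Calling `maybe_record()` every frame yields at most one automatic event per interval, regardless of frame rate.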
In the second scene 302-2, the user's character 244 is approaching a hole in the ground, and the game application 222 records the event as a second event 260-2, including the image of the scene. In some implementations, the hole is assigned a name or location identifier, which is included with the event.
In the third scene 302-3, the user's character 244 has a conversation with an animal, and the game application records the scene as a third event 260-3. The conversation in the third scene 302-3 is included with the recorded event 260-3.
All three of the events are stored in the database 240. Also stored in the database is the name "Jenny," which the user 102 has assigned to the character 244. In some instances, the user 102 assigns his or her own name to the character. Although this illustration shows only three events, a typical sequence of recorded events for a game includes many more events (e.g., 10-50 events). In some implementations, multiple game sessions are combined (e.g., when the sessions have continuity, with a subsequent session beginning where a previous session left off).
In some implementations, multimedia elements are stored as part of the recorded events as well. For example, some implementations include video clips from the game, which may show, for example, movement within the game or a swordfight with an evil villain. In some implementations, the game application 222 includes audio segments, such as talking (e.g., a simulated voice) or sound effects. Some implementations include audio clips from the video game sounds. Some implementations also include video clips or audio clips from the user 102. For example, the user may yell "open sesame" to open a hidden passageway within the virtual environment, and those magic words may be recorded.
The game application 222 receives (506) user input to control actions of a virtual character 244 within the virtual game environment 110. The user 102 may control the actions of the virtual character using various input devices, such as a keyboard, mouse, joystick, trackball, trackpad, or touch screen. The virtual character 244 may be (508) a sentient being (e.g., a human-like creature), an artificial creature or machine (e.g., a robot or a spaceship), a non-sentient organism (e.g., a cell or bacterium), a mythical creature (e.g., a unicorn), or even an inanimate object (e.g., a water droplet). Before beginning the game, some implementations allow the user to select a virtual character, as well as various visual characteristics of the virtual character. For example, in a virtual environment with dinosaurs, the user may be able to select the type of dinosaur, the size or age of the dinosaur, as well as color, texture, or pattern of the dinosaur's body.
Typically, the process displays (510) the virtual game environment and the virtual character on a display device 208 associated with the computing device 104. In some implementations, the game application 222 displays (512) narrative text corresponding to the virtual scene that is displayed. In some of these implementations, the user may choose (514) to save the narrative text with a saved event (e.g., the most recently saved event or the next event to be saved). In some implementations, the user activates (514) this save of narrative text using a user interface control, such as a button or toggle. The activation occurs (514) during user interaction with the virtual game environment 110.
In some implementations, the user assigns (516) a name to the virtual character. In some implementations, the user assigns other attributes of the virtual character as well, such as gender or age. When the user assigns a name, gender, age, or other characteristics, some implementations include these characteristics in the simulated pages.
During a game, the game application 222 records (518), without user input, a temporal sequence of events 260 from the game environment 110. The user may be interacting with the environment (e.g., moving the virtual character 244), but no user action is required to trigger capturing and recording the events. However, in some implementations, a user may trigger recording additional events when desired (e.g., by clicking on a user interface control).
Each event 260 represents (520) an interaction of the virtual character 244 with the game environment. The interaction recorded may represent a short period of time (e.g., the second that the user's character reaches the peak of a mountain), or may represent a longer period of time (e.g., having a conversation with another character or the process of climbing the mountain). Each event includes (522) a respective image 262 and respective text 272 describing the respective event. In some implementations, some of the events include (524) multiple images (e.g., two or more images that are captured in quick succession or multiple images of the same scene at the same time taken from different viewpoints).
In addition to the respective text 272, some implementations record (526) a separate caption or title for some of the recorded images 260 (e.g., “The adventure begins”). In some implementations, the separate caption or title is not editable by the user.
In some implementations, some events include (528) one or more labels that identify locations within the virtual game environment at the time the events are recorded. This is illustrated, for example, by the “volcano” label 656 in
In some implementations, some of the events include (532) labels that identify other virtual characters or virtual objects within the virtual game environment 110 at the time the events are recorded. These labels may be predefined by the game application 222 or assigned by the user. For example, an implementation may include a T. Rex character, which has the default name “T. Rex,” but the user could assign another name.
In some implementations, some events include (534) a conversation between the virtual character and one or more other virtual characters within the virtual game environment at the time the events are recorded. This is illustrated above in the third scene 302-3 in
In some implementations, some events include (536) one or more labels that identify virtual objects collected by the virtual character at the time the events are recorded. Some implementations store the collected objects as part of a recorded game state, but in other implementations, the information about objects is stored separately from a game state.
In some implementations, some events include (538) one or more labels that identify achievements of the virtual character at the time the events are recorded. Some implementations store the achievements as part of a recorded game state, but in other implementations, the information about achievements is stored separately from a game state.
The various labels that may be stored with an event can be combined. For example, in some implementations a first event includes (540) one or more labels that identify current game data at the time the first event is recorded. The current game data includes (540) one or more of: named locations within the virtual game environment; a state of the virtual game environment; other virtual characters or virtual objects within the virtual game environment; virtual objects collected by the virtual character; and achievements of the virtual character.
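The combined game-data snapshot stored with an event can be sketched as a single record gathering the enumerated categories at capture time. The function name `snapshot_game_data` and the dictionary keys are assumptions for illustration, not names from the disclosure.

```python
def snapshot_game_data(game):
    """Collect the label-worthy game data at the moment an event is recorded."""
    return {
        "location": game.get("location"),         # named location, e.g. a volcano
        "state": game.get("state"),               # state of the game environment
        "nearby": game.get("nearby", []),         # other characters / objects
        "inventory": game.get("inventory", []),   # objects collected so far
        "achievements": game.get("achievements", []),
    }
```

Any subset of these fields could then be rendered as labels on the corresponding simulated page.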
In some implementations, at least a subset of the events are automatically recorded (542), without human intervention, when the virtual character reaches a milestone in the virtual game environment. Typically, implementations have a predefined set of milestones, such as reaching specific locations, performing certain actions, collecting specific virtual objects, attaining certain achievement levels, and so on.
Subsequent to the recording, the book simulation module 224 presents (544) to the user a sequence of simulated pages that correspond to the sequence of recorded events. Each simulated page includes (546) the respective recorded image (or images) for a respective event and includes the respective text describing the event. In some implementations, the respective text is editable, so that the user 102 can customize the story that is created. In some implementations, individual words or phrases are designated as editable, and the book simulation module can provide alternatives for selected words or phrases. This is illustrated in
In some implementations, some events have multiple images. These multiple images can be used in various ways. In some implementations, each event corresponds to a single simulated page, and each simulated page has a single image. In some of these implementations, the user is prompted to select which image is used. In some implementations, two or more images for a single event may be placed onto a single simulated page. The images may be selected by the user. In some implementations, when there are multiple images for a single event, the event corresponds to multiple simulated pages. In some implementations, each of the multiple images is presented on a distinct simulated page. In some implementations, the user is prompted to select which images to keep for simulated pages (e.g., for an event with five images, the user may select two of those images, and the book simulation module 224 creates a separate simulated page for each of the selected images).
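The one-page-per-selected-image variant above can be sketched as a simple expansion of a multi-image event into pages, keeping only the images the user chose. The helper name `pages_for_event` is hypothetical.

```python
def pages_for_event(images, text, selected_indices):
    """Create one (image, text) page per image the user chose to keep."""
    return [(images[i], text) for i in selected_indices]

# e.g. an event with five images where the user keeps the second and fourth:
# pages_for_event(["a.png", "b.png", "c.png", "d.png", "e.png"], "caption", [1, 3])
```

The same helper covers the single-image case when `selected_indices` contains one index.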
In some implementations, a first event has a plurality of images and the corresponding simulated page includes (548) two or more of the multiple images 262. For example, an event may include two images taken close to each other in time, and both are displayed on the simulated page in order to illustrate a change that takes place between the two images (e.g., shooting an arrow in one image and having the arrow hit the target in a second image). In other examples, the two or more images 262 may illustrate different perspectives of the same scene, such as views of the scene as seen by two different virtual characters.
In some implementations where events can include more than one image 262, each simulated page includes (550) a single image. In this case, a single event may span multiple simulated pages.
In some implementations, the respective text for some simulated pages includes (552) a plurality of text options, and the user selects one of the options. In some implementations, the text options apply to the text as a whole, but in other implementations, the text options apply to individual words or phrases within the text, as illustrated in
Some implementations include (554) a user-provided name in the respective text on one or more of the simulated pages. For example, the user-provided name “Poul” appears in
In addition to the respective text for each event, some implementations include (556) a separate caption or title for each image. In some implementations, the caption or title can be modified by the user. The caption or title is displayed (556) accompanying the corresponding image. In some implementations, the separate caption is narrative text that appears on the display while a user is interacting with the virtual game environment. In some implementations, a user can choose to save narrative text with an event using an interface control (e.g., a “save” or “record” button).
In some implementations, a first event corresponds (558) to a first simulated page, and presenting the first simulated page to the user includes (558) presenting a plurality of images as alternative options for the first simulated page. The book simulation module 224 then receives (560) user selection of a first image of the plurality of images. In this way users are able to choose images that best represent the stories that they want. In some implementations, a user can also choose to omit all images for some of the events. This can result in a “text only” simulated page or omission of the event from the created digital book.
In some implementations, one or more labels are displayed (562) on some of the simulated pages to identify locations corresponding to the labels. This is illustrated by the label “volcano” 656 in
In some implementations, one or more labels are displayed (566) on some simulated pages to identify other virtual characters or virtual objects. In some implementations, the labels are located adjacent to the corresponding virtual character or virtual object in a simulated page. In some implementations, the labels are connected to the corresponding virtual characters or virtual objects, or there are arrows pointing from the labels to the corresponding virtual characters or virtual objects.
In some implementations, a recorded conversation between the virtual character and one or more other virtual characters is displayed (568) in a textual format on a simulated page.
In some implementations, one or more labels are displayed (570) on some simulated pages to identify virtual objects collected by the virtual character. For example, at the time of an event, the virtual character may have collected a key that will be used later to open a door, so a label or icon representing the collected key may be included on the corresponding simulated page.
In some implementations, one or more labels are displayed (572) on some simulated pages to identify achievements of the virtual character. For example, the virtual character may be recognized for climbing a mountain or slaying a dragon.
Some implementations include (574) at least a portion of recorded narrative text in a simulated page corresponding to a first event. In some implementations, one or more labels that identify current game data at the time a first event was recorded are displayed (576) on a first simulated page.
In some implementations, the sequence of simulated pages includes (578) one or more simulated pages that include graphics other than images recorded during game play. In some implementations, the graphics include (578) one or more of: a map of at least a portion of the virtual game environment, including a path on the map showing movement of the virtual character within the virtual game environment; a photograph of the user taken by a photo sensor associated with the computing device, where the photograph is taken during the user's interaction with the virtual game environment; an image of virtual objects collected by the virtual character; and an image depicting a certificate of achievement of the virtual character in the virtual game environment. In some implementations, the additional graphics can include clip art, other image files stored on the user's computing device 104, or other images publicly available on the Internet.
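The map-with-path graphic described above could, in one possible realization, be produced by recording the character's map coordinates over time and rendering them as an overlay. The sketch below emits an SVG polyline from such a coordinate trail; the coordinate system, sizes, and function name are assumptions for illustration.

```python
def path_to_svg(points, width=200, height=200):
    """Render the virtual character's movement as an SVG polyline overlay.

    `points` is a chronological list of (x, y) map coordinates recorded
    during game play; the result can be composited over a map image.
    """
    pts = " ".join(f"{x},{y}" for x, y in points)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<polyline points="{pts}" fill="none" stroke="red"/>'
            f'</svg>')

svg = path_to_svg([(10, 10), (50, 80), (120, 40)])
```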
In some implementations, one or more simulated pages include (580) multimedia attachments. The multimedia attachments can include (580) one or more of: a video clip from the virtual game environment; an audio clip from the virtual game environment; a video clip of the user interacting with the virtual game environment; and an audio clip of the user interacting with the virtual game environment. Although these multimedia attachments cannot be included in a printed hard-copy book, they may be included in a distributed digital book.
Some implementations provide one or more additional simulated pages to display other information related to the story without necessarily including an image for the virtual character's interaction with the game environment. Some implementations include a "status" page that provides various information about the virtual character in the environment. The status may be displayed using various combinations of text and graphics. Some implementations include a "collected items" page that visually shows the items the virtual character has collected. Some implementations include an "achievements" page that displays certificates, awards, medals, badges, or other accomplishments by the virtual character. These additional simulated pages may occur at various points in the sequence of simulated pages, such as a point in time when the virtual character collects another object.
An important aspect of the disclosed process is that the user can create or modify the text that is displayed with each simulated page. In some implementations, the book simulation module 224 receives (582) user input to modify the respective text for at least a subset of the simulated pages.
Ultimately, the book simulation module generates (584) a file 226 that includes the sequence of simulated pages as modified by the user 102. In some implementations, the user has selected which images to use, and the user-selected images for at least a first simulated page are used (586) in the generated file.
In some implementations, the file 226 includes additional pages, such as a front cover, a copyright page, a dedication page, a table of contents, an index, chapter headers, and/or a back cover. For these additional pages, the user is prompted to provide or select appropriate text, such as a title. In some implementations, the file has (588) a file type that is one of JPEG, TIFF, BMP, PNG, PDF, EPUB, or MOBI.
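The file-type choice described above (588) could be handled by a small dispatch table that maps a requested type to an output extension. The sketch below is illustrative only; the title-sanitizing rule and function name are assumptions, not part of the disclosure.

```python
# The seven file types enumerated in the specification.
SUPPORTED_TYPES = {"JPEG": ".jpg", "TIFF": ".tif", "BMP": ".bmp",
                   "PNG": ".png", "PDF": ".pdf", "EPUB": ".epub",
                   "MOBI": ".mobi"}

def output_filename(title: str, file_type: str) -> str:
    """Build an output filename for the generated book file 226.

    Raises ValueError for a type outside the supported list.
    """
    ext = SUPPORTED_TYPES.get(file_type.upper())
    if ext is None:
        raise ValueError(f"unsupported file type: {file_type}")
    # Replace non-alphanumeric characters so the title is filesystem-safe.
    safe = "".join(c if c.isalnum() else "_" for c in title).strip("_")
    return safe + ext

name = output_filename("Poul's Adventure", "EPUB")
```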
In some implementations, a user 102 can create multiple book versions from a single set of events. For example, a user may save a first file 226, then use a “SAVE AS” feature to create one or more additional versions, which can be customized independently of the first saved version. Some implementations also enable a user to create new storybook files 226 based on two or more existing files. In this way, a user can combine interesting parts of multiple stories and omit parts that are not as interesting to the user.
In some implementations, the book simulation module 224 facilitates (590) printing the file 226 to create a tangible book that includes the simulated pages as modified by the user. In some instances, the created file 226 is transmitted to a publisher or bookbinder, which prints and binds a book corresponding to the file.
In some implementations, the book simulation module 224 transmits (592) the file to a remote book printing provider with instructions to ship a bound book corresponding to the file to a specified geographic address. The geographic address may be the address of the user, or the physical address of a friend or relative.
As illustrated in
The process 500 has been described for an implementation in which events are captured during game play and the pages for a corresponding storybook are created after the game play is over. Some implementations vary this overall process. For example, in some implementations, game play may extend over a longer period of time (e.g., days), and may comprise multiple distinct sessions. Some implementations enable a user to create a single storybook from these spread-out sessions, particularly when the multiple sessions are conceptually part of a single story line.
In some implementations, the number of captured events may be very large, especially for a story that was constructed by a user over a period of days or weeks. In this case, some implementations allow a user to omit/delete some of the simulated pages so that they do not appear in the saved file 226.
Some implementations provide an integrated book-building feature or option, which allows a user to view and edit the simulated pages as the events occur, or shortly thereafter. For example, in an interactive video game with a plurality of discrete scenes, some implementations enable the user to build the book pages during the transition from one scene to the next scene.
After a game is played, a default digital book may be created that uses the recorded images, captions, labels, and other data.
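Assembling such a default digital book amounts to generating one page per recorded event from the stored images, captions, and labels. The sketch below illustrates this under assumed field names; the stored event format is not specified in the text.

```python
def default_book(events):
    """Assemble a default digital book: one page per recorded event,
    using each event's first image and its recorded caption and labels.
    (Field names are illustrative; the stored event format is an assumption.)
    """
    pages = []
    for ev in events:
        pages.append({
            "image": ev["images"][0] if ev["images"] else None,  # "text only" page if no image
            "caption": ev.get("caption", ""),
            "labels": ev.get("labels", []),
        })
    return pages

book = default_book([
    {"images": ["shot1.png"], "caption": "Poul finds the key.",
     "labels": ["#key"]},
    {"images": [], "caption": "A quiet moment."},
])
```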
The screenshots in
Some implementations enable a user to associate each potential location with a label. For example, the label #rabbitHole may be used to identify a virtual rabbit hole in the game. When the user moves the virtual character into the vicinity of the rabbit hole, the #rabbitHole label is added to the set of stored labels. In some implementations, the user may assign a custom label to a location. In addition, an image of the user's view of the location is captured and associated with that label, and recorded in the game log 252. Furthermore, other information about the state of the game, such as names of elements within the user's view, other characters, or the simulated time, are captured and associated with that label in storage. The label, image, and data form an event, and may be encapsulated as one unified element in data storage.
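The "one unified element in data storage" described above can be sketched as a single record carrying the label, the captured image, and the associated game-state data, serializable into the game log 252. The field names below are illustrative assumptions.

```python
import json
from dataclasses import dataclass, asdict, field
from typing import Dict

@dataclass
class GameEvent:
    """One unified event: a label, the captured view of the location,
    the simulated time, and other game-state data captured with it."""
    label: str                            # e.g., "#rabbitHole"
    image: str                            # path/id of the captured view
    sim_time: int                         # simulated game time
    state: Dict[str, str] = field(default_factory=dict)

    def to_log_entry(self) -> str:
        """Serialize the unified event for the game log."""
        return json.dumps(asdict(self))

ev = GameEvent("#rabbitHole", "frame_0042.png", 1200,
               {"nearby": "White Rabbit"})
entry = ev.to_log_entry()
```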
In some implementations, a label is associated with an activity in the game, such as the opening of a virtual door. For example, if the user manipulates the virtual character to open the door (e.g., turning a knob or key), the data for the unified event is recorded. The data may include a label, such as #wardrobeDoor or a custom label.
In some implementations, a label may be associated with the collecting of an item (or items) in the game. For example, the label #berry may be associated with a particular virtual berry in the game. If the user collects the virtual berry, the data for that unified event is recorded in the game log 252.
In some implementations, a label may be associated with the interaction between a user's character and a conversational agent (e.g., another character). The user's character may ask questions or answer questions, and the entire conversation is recorded as part of an event. For example, the label #howOld may be associated with a question or set of questions asked by the user's character.
In some implementations, a label may be associated with the accomplishment of a goal, task, or mission. The accomplishment earns the virtual character an award within the game. For example, the label #climbedVolcano may be associated with the goal of climbing a virtual volcano within the game. If the user achieves the goal (i.e., the virtual character in the game climbs the volcano), the data for that unified event is recorded in the game log 252.
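The label-triggered recording described in the preceding paragraphs (opening a door, collecting an item, a conversation, accomplishing a goal) can be sketched as a single log that appends a unified record whenever a labeled trigger fires. The class and field names are illustrative assumptions.

```python
class GameLog:
    """Minimal sketch of the game log 252: label-triggered events
    append one unified record each."""

    def __init__(self):
        self.entries = []

    def record(self, label, kind, image, **state):
        """Record a unified event: its label, trigger kind, captured
        image, and any extra game-state data."""
        self.entries.append({"label": label, "kind": kind,
                             "image": image, "state": state})

log = GameLog()
log.record("#berry", "collect", "berry_view.png", item="virtual berry")
log.record("#climbedVolcano", "goal", "summit.png", award="Volcano Badge")
```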
In some implementations, each label is associated with a textual component that provides a template 248 for a caption to be associated with an image. In some implementations, the template is represented as a sequence of words, separated by punctuation to indicate which elements of the paragraph are fixed and which elements have alternatives. In other implementations, the template is represented by an XML document that is isomorphic with the punctuated sequence of words. For example, the sequence of words that corresponds to
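A caption template whose punctuation separates fixed words from alternatives could be rendered as below. The `{first|second}` brace-and-pipe syntax is one possible concrete encoding of the punctuated sequence (an assumption, since the text does not fix the punctuation scheme); the first alternative serves as the default.

```python
import re

def render_template(template: str, choices: dict) -> str:
    """Render a caption template whose alternative words are marked
    with punctuation, e.g. "{bravely|carefully}".

    `choices` maps a slot index to the index of the selected
    alternative; unlisted slots fall back to the first alternative.
    """
    slots = re.findall(r"\{([^}]*)\}", template)
    out = template
    for i, slot in enumerate(slots):
        options = slot.split("|")
        picked = options[choices.get(i, 0)]
        out = out.replace("{" + slot + "}", picked, 1)
    return out

caption = render_template(
    "Poul {bravely|carefully} climbed the {volcano|mountain}.",
    {1: 1})  # keep the default first word, pick "mountain" in slot 1
```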
The user may select to view the recording of the interactions (events) within the game, and this is presented to the user as a set of simulated pages from a virtual book. On each page is a rendering of the image corresponding to the event, accompanied by a paragraph caption that provides a narrative description of the event. The caption is associated with one or more labels that correspond to the event, and may be represented within the game using one of the implementations described above. The user may interact with the language of the paragraph, by making choices of alternative words as illustrated in
In addition to page captions, some implementations also render the captured user activity associated with each recorded event. For example, in the case of a recorded event that corresponds to an interaction with a conversational agent, the questions addressed to the agent, along with their responses, may be rendered on the page.
In some implementations, the final step is the production of a printable format of the book that can be printed or shared with others via available electronic and digital communication mechanisms.
In some implementations, after a digital book is created, a user may read the book aloud and record the reading as part of the digital book. For example, a child may create a book representing the interaction with the game, narrate the book, and transmit a copy to a grandmother, who can see the images and hear the story as read by the grandchild. In some implementations, a user can repeat the recording of the audio multiple times so that the user can save a good recording. In some implementations, audio recordings can be created after the digital book is created and distributed. Note that the audio narration can be created by anyone, and not necessarily the user who created the interaction for the book. In some implementations, attached audio files are created for each page separately. In other implementations, each digital book can have a single audio file.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the invention. As used in the description and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations described herein were chosen and described in order to explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various implementations with various modifications as are suited to the particular use contemplated.