Related subject matter is disclosed in the following patent application, which is commonly owned and co-pending with the present application, and the entire contents of which are hereby incorporated by reference: U.S. patent application Ser. No. 13/857,550, filed herewith, titled “INTERACTIVE METHOD AND APPARATUS FOR MIXED MEDIA NARRATIVE CONSUMPTION”.
The present invention relates to a system for generating a mixed media narrative presentation.
Narratives or stories are commonly available for electronic presentation on a computing device, such as a laptop or tablet computer or a cellular phone, and are increasingly available in more than one medium of expression. For example, a narrative may be available as an electronic book (e-book), an audio book, a video, a television program and/or a comic strip/book, that is, a group of cartoons arranged in a narrative sequence. Typically, an entire narrative will be presented in a single, consumer-selected medium, for example, an audio book. However, as narratives become available in differing mediums of expression, interest has increased in comparing a narrative, or a portion of a narrative, as presented in different media and in experiencing a multi-media presentation of a narrative.
A narrative or story may be expressed in one or more media. For example, narratives are commonly expressed as a video, an electronic book (e-book), an audio book, a television program and/or a comic book/strip, that is, a group of cartoons arranged in a narrative sequence. In embodiments, a method and apparatus are provided for presenting a narrative comprising portions presented in respective user-selectable media of expression.
An embodiment provides a system for presenting a mixed media narrative comprising a memory storing data accessible by a data processing unit, the data including a segment of a first medium expression of a narrative, a segment of a second medium expression of the narrative and data identifying the segment of the first medium expression as substitutable for the segment of the second medium expression; and a data processing unit arranged to access the data stored in the memory and present the segments of the narrative to a user, the data processing unit presenting the segment of the first medium expression to the user during a presentation of the second medium expression if the user expresses a preference for the first medium expression and if data in the memory identifies the segment of the first medium expression as substitutable for the segment of the second medium expression.
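By way of illustration only, the segment and substitutability data held in the memory could be organized along the following lines; the Python class names and fields below are assumptions made for this sketch and are not part of the claimed system.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One segment of a narrative in a single medium of expression."""
    segment_id: str
    narrative_id: str
    medium: str                      # e.g. "video", "audio_book", "e_book"
    index: int                       # position within this medium expression
    content_uri: str                 # where the segment's media data is stored
    metadata: dict = field(default_factory=dict)          # descriptive metadata for the segment
    substitutable_for: set = field(default_factory=set)   # ids of segments this one may replace

@dataclass
class Narrative:
    """A narrative held as parallel, ordered sequences of segments, one per medium."""
    narrative_id: str
    expressions: dict = field(default_factory=dict)  # medium -> list of Segment, in narrative order
```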
A further embodiment provides an apparatus for presenting mixed media narratives comprising a media reconciler arranged to define segments of a first medium expression of the narrative and to define segments of a second medium expression of the narrative; a media interchange linker arranged to analyze the segments of the first medium expression and the segments of the second medium expression and, if a segment of the second medium expression is substitutable for a segment of the first medium expression, to associate data with the segment of the first medium expression indicating that the segment of the second medium expression is substitutable for the segment of the first medium expression; and a data processing unit to present to a user, during a presentation of the first medium expression of the narrative, the segment of the second medium expression if the second medium expression is selected by the user and the data indicates that the segment of the second medium expression is substitutable for the segment of the first medium expression.
A still further embodiment provides a method for presenting a mixed media narrative that includes presenting a first segment of a first medium expression of the narrative to a user; and presenting a first segment of a second medium expression of the narrative to the user if the user signals a preference for the second medium expression and if the first segment of the second medium expression is substitutable for a second segment of the first medium expression of the narrative.
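A minimal sketch, assuming the segment model sketched above, of how the presenting step could decide whether to substitute a segment from the user's preferred medium; the helper name and lookup logic are illustrative assumptions rather than a disclosed implementation.

```python
def next_segment(current, preferred_medium, narrative):
    """Choose the next segment to present, honoring the user's medium preference.

    If the user prefers a different medium and a segment of that medium is marked
    as substitutable for the upcoming segment, the substitute is returned; otherwise
    presentation continues in the current medium. Returns None at the end.
    """
    sequence = narrative.expressions[current.medium]
    if current.index + 1 >= len(sequence):
        return None                       # last segment already presented
    upcoming = sequence[current.index + 1]
    if preferred_medium == current.medium:
        return upcoming
    # look for a segment of the preferred medium marked substitutable for the upcoming one
    for candidate in narrative.expressions.get(preferred_medium, []):
        if upcoming.segment_id in candidate.substitutable_for:
            return candidate
    return upcoming
```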
Referring in detail to the drawings, where similar parts are identified by like reference numerals, and, more particularly, to
A narrative commonly comprises a sequence of narrative segments. For example, a digital video typically comprises a plurality of sequential scenes, each comprising a succession of frames or images and an audio track which may include dialogue, music and sound effects. Books, in either text or audio form, commonly comprise a series of chapters, each typically comprising a plurality of paragraphs, each of which, in turn, comprises one or more sentences, including topic and supporting sentences and dialogue. Referring also to
In the mixed media presentation system 22, the segmented narrative expression may be transmitted to either an automated media interchange suggestion module 38 or a media interchange authoring portal 40. In the automated media interchange suggestion module 38, the data processing unit 28 analyzes the segments of plural medium expressions, for example, the segments 64 of the first medium expression 62 of the narrative and the metadata describing the respective segments, and compares the segments of the first medium expression to the respective segments 68 and associated metadata of the second medium expression 66, for example an audio book, to determine which segments of the first medium expression are substitutable for the segments of the second medium expression. The automated media interchange suggestion module adds metadata 70 to each segment of the plural expressions of the narrative, linking a segment 64 of the first expression 62 to one or more corresponding segments 68 of the narrative in the second medium expression 66, and vice versa. For example, segments of a video's audio track containing a character's dialogue may be linked to segments of an audio book or an e-book where the character is quoted, enabling substitution of the video actor's dialogue for the narrator's dialogue or for the text expressing the character's dialogue. The system provides plural levels of granularity, enabling mapping and substitution between differing media expressions of the narrative, such as substitution of a scene from one video expression for a scene of a second video expression, or linking of a video scene, image or sound effect to a chapter or a paragraph of an e-book or an audio book, permitting simultaneous presentation of a video scene or image while the audio or sound effect is output or the e-book text is presented on a second or a divided display.
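One hypothetical way the automated suggestion step could compare segment metadata and record the linking metadata 70, assuming the segment sketch above; the similarity function and threshold are assumptions supplied for the example, not a disclosed matching method.

```python
def link_substitutable_segments(first_expr, second_expr, similarity, threshold=0.8):
    """Compare the segments of two medium expressions of the same narrative and,
    where their descriptive metadata are sufficiently alike, mark each segment as
    substitutable for the other (the linking metadata 70 described above).

    'similarity' is any caller-supplied function scoring two segments' metadata
    between 0.0 and 1.0 (e.g. overlap of quoted dialogue or scene descriptions).
    """
    for seg_a in first_expr:
        for seg_b in second_expr:
            if similarity(seg_a.metadata, seg_b.metadata) >= threshold:
                seg_a.substitutable_for.add(seg_b.segment_id)
                seg_b.substitutable_for.add(seg_a.segment_id)
```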
In the media interchange authoring portal 40, an author 42 interested in developing a mixed medium narrative may manually segment a narrative and/or add metadata 70 linking segments of an expression of a narrative to the segments of a second expression of the narrative which has been segmented and stored in the mixed media presentation system's memory.
When the segments of the plural expressions of a narrative have been linked in either the automated media interchange suggestion module 38 or the media interchange authoring portal 40, the segmented narratives, including the segment-descriptive metadata linking the segments of the different media expressions, are stored in the mixed media presentation module database 32. Referring also to
Referring also to
The system memory 152 may include nonvolatile and/or volatile computer accessible storage media, which may be implemented in any method or technology suitable for storing computer-readable information, such as computer readable instructions, data structures, program modules, narrative data or other data. Computer storage media includes, but is not limited to, random access memory (RAM); read-only memory (ROM); EEPROM; flash memory; optical storage, such as digital versatile disks (DVD); and magnetic storage devices. A basic input/output system (BIOS), containing basic routines that aid in transferring information between elements within the computing device, such as during start-up, is typically stored in non-volatile memory. Data and/or program modules, such as an operating system, application programs and data, are also typically stored in non-volatile memory, such as flash memory or magnetic disk storage, but may be copied to volatile memory, such as RAM, for immediate accessibility and/or utilization by the processing unit.
The user's computing device 26 also typically includes a communication interface 156, and can comprise communication media embodying computer-readable instructions, data structures, program modules or other data. Information may be communicated, for example, using a modulated data signal having one or more characteristics changeable in a manner to encode information in the signal, such as a carrier wave or other data transport medium. By way of example, but not limitation, communication media includes wired media such as a wired network or a direct-wired connection and wireless media such as acoustic, radio frequency (RF) 158, infrared and other wireless media or any combination of computer readable communication media.
Typically, the user's computing device includes a monitor or other display device 162 for visually presenting data, including video and text data, to the user. The display is commonly connected to the processing unit via a video interface 164. In addition, the user's computing device 26 commonly incorporates an audio output device, such as a speaker 170 and/or headphones 172, interconnected to the system bus by an output peripheral interface 171. A user may enter commands and information into the computing device through one or more input devices, such as a keyboard 174, a pointing device 175 (such as a mouse, trackball or touch pad), a microphone or a game pad, connected to the processing unit by an input device interface 176. The user's computing device may also comprise a virtual input mechanism, such as a virtual keyboard or pointing device, operated by touch, stylus or gesture interaction 168 with the monitor 162 and in communication with the data processing unit by a touch/gesture controller which may, for example, be part of the video interface 164.
When the user inputs a command to the user's computing device 26 requesting a menu of available narratives, the request is transmitted 102 to the data processing unit 28 of the mixed media presentation system 22 by the communication system interconnecting the two devices. Referring also to
From the menu of available narratives and narrative expressions presentable on the user's computing device, the user can select a preferred narrative and medium of expression to be presented. The user's preference for a narrative and a medium of expression is transmitted 108 from the user's computing device to the data processing unit 28 of the mixed media presentation system 22 and stored in the mixed media presentation system memory 30. Alternatively, a user profile 44 including media preferences previously selected by the user may be stored in the memory 30 of the mixed media presentation system 22, or stored on the user's computing device for transmittal to the mixed media presentation system when the user requests a menu of available narrative expressions. In addition, statistical information related to media interchanges of a narrative requested by past users or groups of users, such as social media associates of the user, may be stored in the memory of the mixed media presentation system and presented to the user as a graphic or other representation of the popularity of a particular preference, for example, as a heat map, enabling the user to select more commonly requested segments and media when a narrative is presented.
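For illustration, the popularity statistics described above might be aggregated per segment along the following lines; the log format and the normalization are assumptions made for the sketch.

```python
from collections import Counter

def interchange_heat_map(interchange_log):
    """Aggregate past users' media interchanges into per-segment popularity scores.

    'interchange_log' is an iterable of (segment_id, chosen_medium) pairs recorded
    for previous users or a social group; the result maps each segment to the
    fraction of choices for each medium, suitable for rendering as a heat map.
    """
    counts = {}
    for segment_id, medium in interchange_log:
        counts.setdefault(segment_id, Counter())[medium] += 1
    return {
        segment_id: {medium: n / sum(c.values()) for medium, n in c.items()}
        for segment_id, c in counts.items()
    }
```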
Referring also to
In an embodiment, the mixed media experience is enhanced by a novel interactive user interface for the user's computing device 26, enabling a user to monitor and control the progress of a mixed media narrative presentation and to select the media in which portions of the narrative are presented. Referring to
The mixed media user interface 200 also includes a media selector 214. When the user's computing device 26 receives a segment of the narrative from the mixed media presentation system 22, the narrative segment may be accompanied by metadata describing the presentation options, including the available media expressions, for the next segment of the narrative. The media selector 214 displays the medium expression options, for example, video 216, audio 218 and e-book 220, for the next narrative segment. The user may select a new medium expression for the next narrative segment by selecting one of the available medium expression options, for example, by touching 222 a portion of the monitor displaying the desired option. Alternatively, the mixed media presentation module may recover a segment of the narrative in the new medium of expression that is substitutable for the segment being presented and transmit that segment to the user's computing device for presentation after completion of the presentation of the current segment. Segments of plural media expressions may also be presented simultaneously on differing output devices of the user's computing device; for example, audio and video may be presented simultaneously with a display and speakers, or text and images may be presented simultaneously on two displays or on a display divisible into plural windows. In this case, the media selector indicates the media options for the segment currently being presented. The mixed media presentation module continues to recover successive segments of the narrative from memory in the user-selected medium of expression and transmit those segments to the user's computing device until the user selects another medium of expression, interrupts the presentation by selection of the presentation icon 212, or the last segment of the narrative has been presented.
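The metadata accompanying a delivered segment, as described above, might resemble the following structure; the keys and values are purely illustrative assumptions and not a defined interchange format.

```python
# Hypothetical metadata delivered with a narrative segment, describing the
# presentation options for the next segment of the narrative.
next_segment_options = {
    "narrative_id": "example-narrative",
    "current_segment": 12,
    "next_segment": 13,
    "available_media": ["video", "audio_book", "e_book"],
    # media that may be presented simultaneously on differing output devices
    "simultaneous_combinations": [["audio_book", "e_book"], ["video", "e_book"]],
}
```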
Referring to
The user may elect to have the narrative presented as an ordered succession of segments by selecting puzzle pieces corresponding to one of the possible plotlines and then selecting the area of the interface denoted by the “Start of Narrative” legend 266 with a pointer or other selection mechanism available on the user's computing device. The computing device will then select an ordered progression of the narrative segments, each represented by an adjacent puzzle piece, ending with the segment 268 adjacent to the “End of Narrative” legend 270 or the last segment selected by the user.
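A minimal sketch of how the ordered progression might be assembled from the user's puzzle-piece selections, assuming each plotline is held as an ordered list of segment identifiers; that representation is an assumption made for the example.

```python
def ordered_progression(plotline, selected_ids):
    """Return the ordered run of segments to present along a chosen plotline.

    'plotline' is the full ordered list of segment ids for one possible plotline;
    the progression runs from the start of the narrative to the last segment the
    user selected, or to the end of the plotline if no selection limits it.
    """
    positions = [plotline.index(s) for s in selected_ids if s in plotline]
    if not positions:
        return list(plotline)
    return plotline[: max(positions) + 1]
```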
On the other hand, the segments of the requested narrative may be presented in a random manner by selecting an individual puzzle piece 254, for example, the piece 263. Selected segments of the narrative will be obtained from the mixed media presentation memory 32 and transmitted to the user's computing device for presentation.
The available media for presenting the next segment of an ordered presentation of a narrative, or a selected segment of a narrative, may be indicated to the user either by a media selection area of the interface displaying controls 270 enabling selection of one of the media available for presentation of the segment, or by a pop-up menu 272 which appears when the user engages a respective puzzle piece with a pointing device. The pop-up menu 272 may include an indicator 274 of a preferred medium for the segment, based on the medium selected for adjacent segments, a prior selection by the user, a selection by a group of users, or otherwise. When the interface indicates that plural media are available for presentation of a segment, the user may select a preferred medium by selecting one of the available media 276. The identity of the preferred medium is transmitted to the data processing unit 28 of the mixed media presentation system 22, which recovers the segment in the designated medium from the mixed media presentation memory 32 and transmits the segment to the user's computing device 26 for presentation. A progress icon 278 displayed on the interface indicates which segment is currently being presented and may include an indication 279 of the state of completion of the segment's presentation. A presentation control icon 280 enables selective control of the progression of the narrative presentation and indicates whether the presentation is proceeding or, as illustrated, has been halted by the user. As each segment is presented, a label 282 is superimposed on the corresponding puzzle piece indicating the medium in which that segment of the narrative was presented.
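The preferred-medium indicator 274 could be driven by a simple heuristic such as the following sketch, which favors the media chosen for adjacent segments, then the user's own prior choices, then the group popularity data from the heat-map sketch above; the ordering of these signals is an assumption, not a disclosed ranking.

```python
from collections import Counter

def suggest_medium(segment_id, adjacent_media, user_history, group_heat):
    """Suggest a preferred medium for a segment.

    'adjacent_media' lists the media already chosen for neighboring segments,
    'user_history' lists the user's prior medium selections, and 'group_heat'
    maps segment ids to per-medium popularity fractions.
    """
    if adjacent_media:
        return Counter(adjacent_media).most_common(1)[0][0]
    if user_history:
        return Counter(user_history).most_common(1)[0][0]
    popularity = group_heat.get(segment_id, {})
    return max(popularity, key=popularity.get) if popularity else None
```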
Referring also to
To further enhance the user's experience, the user's computing device 26 may present a user interface enabling the user to select narrative segments in a game in which the user competes against a clock to fit the segments into a complete narrative. Referring also to
The mixed media narrative presentation system enables a user to select and mix the mediums in which segments of a narrative are presented.
The detailed description, above, sets forth numerous specific details to provide a thorough understanding of the present invention. However, those skilled in the art will appreciate that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid obscuring the present invention.
The terms and expressions that have been employed in the foregoing specification are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims that follow.