Claims
- 1. In a television entertainment system, a method for substantially reducing an amount of bandwidth used to deliver broadcast data, the method comprising:
identifying, for reuse during transcoding operations, substantially similar layers across multiple pages of Web content; and transcoding the pages into a program comprising multiple video components, the pages being transcoded such that a layer that is similar across multiple ones of the pages is not encoded into a respective video component for each similar occurrence of the layer, the layer and all other similar layers being represented in the program with a single still of the video components and metadata.
- 2. A method as recited in claim 1, wherein the layer is either a background layer, an image layer, or a text layer.
- 3. A method as recited in claim 1, wherein the pages are in a Hypertext Markup Language (HTML) data format.
- 4. A method as recited in claim 1, wherein the video components are Moving Pictures Experts Group (MPEG) stills.
- 5. A method as recited in claim 1, wherein before transcoding, the method further comprises authoring one or more of the pages to indicate individual layers.
- 6. A method as recited in claim 1, wherein before transcoding, the method further comprises fetching the Web content from an external content provider, and wherein the transcoding is performed by a server at a cable head-end.
- 7. A method as recited in claim 1, wherein transcoding further comprises encoding multiple video components from a single page of the pages, each of the multiple video components corresponding to a respective page layer.
- 8. A method as recited in claim 1, wherein multiple pages of the Web content comprise individual instances of substantially similar layers, and wherein transcoding further comprises:
rendering one of the individual instances for only a first page of the multiple pages to generate a single shared video component; and referencing the single shared video component in metadata corresponding to each other page of the multiple pages that is not the first page.
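Claims 1 and 8 recite rendering a shared layer only once and then referencing that single shared video component from the metadata of every other page. The following is a minimal, hypothetical sketch (in Python, outside the claims) of that reuse logic; the `Layer` and `PageMetadata` structures and the `content_hash` similarity fingerprint are illustrative assumptions, not terms from the specification.

```python
# Illustrative sketch only: deduplicate substantially similar layers across pages
# so each shared layer is rendered into a single still and merely referenced
# elsewhere. All names here are hypothetical stand-ins.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass(frozen=True)
class Layer:
    kind: str          # "background", "image", or "text"
    content_hash: str  # stand-in for a similarity fingerprint of the rendered layer

@dataclass
class PageMetadata:
    page_id: str
    still_refs: List[str] = field(default_factory=list)  # ids of stills this page reuses

def transcode_pages(pages: Dict[str, List[Layer]]):
    """Render each distinct layer once; later occurrences only get a metadata reference."""
    stills: Dict[str, str] = {}     # content_hash -> still id
    metadata: List[PageMetadata] = []
    for page_id, layers in pages.items():
        page_meta = PageMetadata(page_id)
        for layer in layers:
            if layer.content_hash not in stills:
                # First occurrence: "render" the layer into a new still.
                stills[layer.content_hash] = f"still-{len(stills)}"
            # Every occurrence (first or later) is referenced through metadata.
            page_meta.still_refs.append(stills[layer.content_hash])
        metadata.append(page_meta)
    return stills, metadata

if __name__ == "__main__":
    pages = {
        "page1": [Layer("background", "bg-A"), Layer("text", "txt-1")],
        "page2": [Layer("background", "bg-A"), Layer("text", "txt-2")],
    }
    stills, metadata = transcode_pages(pages)
    # The shared background "bg-A" is rendered once but referenced by both pages.
    print(len(stills), [m.still_refs for m in metadata])
```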
- 9. A method as recited in claim 1, wherein transcoding further comprises, for each video component, assigning a temporal reference to indicate a decode order for a client.
- 10. A method as recited in claim 1, further comprising delivering the video components to a client in decode or non-decode order.
- 11. A method as recited in claim 1, wherein transcoding further comprises:
encoding a background layer as an intra picture; encoding an image layer of the layers as a predicted picture, the predicted picture being calculated from the intra picture; and wherein the intra picture and the predicted picture are video components.
- 12. A method as recited in claim 11, wherein the predicted picture is a first predicted picture, and wherein transcoding further comprises encoding a text layer as a second predicted picture, the second predicted picture being based on the first predicted picture, the second predicted picture being a video component of the video components.
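Claims 11 and 12 recite encoding the background layer as an intra picture and the image and text layers as chained predicted pictures. The sketch below (illustrative only, not part of the claims) shows the dependency bookkeeping that ordering implies; it performs no real MPEG encoding, and the `Picture` structure is a hypothetical stand-in.

```python
# Illustrative sketch only: represent the background as an intra (I) picture and
# the image and text layers as predicted (P) pictures, each predicted from the
# previous picture, in the spirit of claims 11 and 12.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Picture:
    layer: str                      # "background", "image", or "text"
    picture_type: str               # "I" or "P"
    predicted_from: Optional[str]   # layer name of the reference picture, if any

def encode_page_layers(layer_order: List[str]) -> List[Picture]:
    """Encode a page's layers as one I picture followed by chained P pictures."""
    pictures: List[Picture] = []
    previous: Optional[str] = None
    for layer in layer_order:
        if previous is None:
            pictures.append(Picture(layer, "I", None))
        else:
            pictures.append(Picture(layer, "P", previous))
        previous = layer
    return pictures

if __name__ == "__main__":
    for pic in encode_page_layers(["background", "image", "text"]):
        print(pic)
```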
- 13. A method as recited in claim 1, wherein the metadata identifies client presentation layout characteristics of the video components, and wherein transcoding further comprises:
for a page of the pages:
(a) extracting text from a layer of the layers; (b) encoding the text into the metadata; and (c) rendering the layer as a bitmap that does not include the text.
- 14. A method as recited in claim 13, wherein extracting the text further comprises extracting text attributes from the layer, and wherein transcoding further comprises embedding the text attributes into metadata that corresponds to the page.
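Claims 13 and 14 recite extracting text and its attributes from a layer into metadata while rendering the remaining layer as a bitmap that no longer carries the text. The following hypothetical sketch illustrates that split with a toy layer model; the dictionary fields are assumptions introduced for illustration only.

```python
# Illustrative sketch only: strip text (and its attributes) out of a layer into
# page metadata, leaving a bitmap description without the text, as in claims 13-14.
from typing import Dict, List, Tuple

def extract_text(layer: Dict) -> Tuple[Dict, List[Dict]]:
    """Split a layer into (a) its non-text drawing elements and (b) text records
    destined for the metadata, each carrying its attributes (font, color, position)."""
    text_records = [
        {"text": el["text"], "font": el.get("font"), "color": el.get("color"),
         "x": el.get("x"), "y": el.get("y")}
        for el in layer["elements"] if el["type"] == "text"
    ]
    bitmap_only = {
        "name": layer["name"],
        "elements": [el for el in layer["elements"] if el["type"] != "text"],
    }
    return bitmap_only, text_records

if __name__ == "__main__":
    layer = {"name": "headline", "elements": [
        {"type": "rect", "x": 0, "y": 0, "w": 640, "h": 80},
        {"type": "text", "text": "Top Stories", "font": "Sans 24", "color": "#fff",
         "x": 16, "y": 20},
    ]}
    bitmap_layer, metadata_text = extract_text(layer)
    print(bitmap_layer, metadata_text)
```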
- 15. A computer-readable medium comprising computer-program instructions executable by a processor to perform operations as recited in the method of claim 1.
- 16. A head-end server comprising a processor coupled to a computer-readable medium comprising computer-program instructions executable by the processor to perform operations as recited in the method of claim 1.
- 17. A computer-readable medium comprising computer program instructions executable by a processor, the computer-program instructions comprising instructions for:
for a plurality of interface pages, individual ones of which have multiple component layers, identifying at least two instances of a substantially similar layer of the multiple component layers, multiple ones of the interface pages having respective instances of the substantially similar layer; and transcoding the interface pages into an interactive walled garden program (iWGP) such that the at least two instances are represented in the iWGP with a single video still.
- 18. A computer-readable medium as recited in claim 17, wherein transcoding further comprises:
encoding the substantially similar layer for a first one of the multiple ones into the single video still; and for individual ones of the multiple ones that are not the first one, referencing the single video still in metadata for layer reuse.
- 19. A computer-readable medium as recited in claim 17, wherein the interface pages are in a Hypertext Markup Language (HTML) data format, and wherein the single video still is in a Moving Pictures Experts Group (MPEG) data format.
- 20. A computer-readable medium as recited in claim 17, wherein individual ones of the multiple component layers represent either a background layer, an image layer, or a text layer.
- 21. A computer-readable medium as recited in claim 17, before the instructions for identifying, further comprising computer-program instructions for downloading the interface pages from an external Web data source.
- 22. A computer-readable medium as recited in claim 17, wherein the computer-program instructions further comprise instructions for delivering the iWGP as multiple video components and corresponding interaction model metadata to a client in a television entertainment system, the multiple video components being delivered for receipt by the client in decode or non-decode order.
- 23. A computer-readable medium as recited in claim 17, wherein the iWGP comprises multiple video components, and wherein the computer-program instructions for transcoding further comprise instructions for:
encoding a background video component as an intra picture; encoding an image video component as a first predicted picture based on the intra picture; and encoding a text video component as a second predicted picture based on the first predicted picture.
- 24. A computer-readable medium as recited in claim 17, wherein the computer-program instructions for transcoding further comprise instructions for:
for at least one page of the interface pages:
(a) extracting text; and (b) encoding the text into metadata for delivery to a client, the text not being represented in the iWGP as a video component.
- 25. A cable head-end server coupled over a network to an external data source and a client computing device, the server comprising:
a processor; and a memory coupled to the processor, the memory comprising computer-program instructions that are executable by the processor for:
downloading Web content from the external data source; identifying multiple instances of similar content across multiple pages of the Web content; and transcoding the Web content into multiple video stills and corresponding interaction model metadata, the transcoding being performed such that a single instance of the multiple instances is referenced by the interaction model metadata for all of the multiple pages, the single instance being rendered into a still of the multiple video stills only for a particular one of the multiple pages.
- 26. A cable head-end server as recited in claim 25, wherein the similar content corresponds to a particular layer of multiple component layers, each page of the multiple pages comprising the multiple component layers.
- 27. A cable head-end server as recited in claim 25, wherein the similar content is a background layer, an image layer, or a text layer.
- 28. A cable head-end server as recited in claim 25, wherein the Web content is in a Hypertext Markup Language (HTML) data format.
- 29. A cable head-end server as recited in claim 25, wherein the computer-program instructions further comprise instructions for delivering the multiple video stills to the client in decode or non-decode order, a decode order being specified by the interaction model metadata.
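Claims 9, 10, 22, and 29 recite delivering video components in decode or non-decode order, with a decode order specified through temporal references carried in the interaction model metadata. The sketch below (illustrative only) shows a receiver reordering components by such a temporal reference; all field names are hypothetical.

```python
# Illustrative sketch only: components may arrive in any order, and a temporal
# reference in the interaction model metadata recovers the decode order.
from typing import Dict, List

def decode_order(components: List[Dict], metadata: Dict[str, int]) -> List[Dict]:
    """Sort received components by the temporal reference assigned in metadata."""
    return sorted(components, key=lambda c: metadata[c["id"]])

if __name__ == "__main__":
    received = [{"id": "text-still"}, {"id": "background-still"}, {"id": "image-still"}]
    temporal_refs = {"background-still": 0, "image-still": 1, "text-still": 2}
    print([c["id"] for c in decode_order(received, temporal_refs)])
    # -> ['background-still', 'image-still', 'text-still']
```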
- 30. A cable head-end server as recited in claim 25, wherein the Web content comprises multiple interface pages each of which consists of multiple component layers, and wherein the computer-program instructions for transcoding the Web content further comprise instructions for:
encoding a background layer of the multiple component layers as an intra picture; encoding an image layer of the multiple component layers as a first predicted picture that is predicted from the intra picture; and encoding a text layer of the multiple component layers as a second predicted picture that is predicted from the first predicted picture.
- 31. A cable head-end server as recited in claim 25, wherein the Web content comprises multiple interface pages each of which includes multiple component layers, and wherein the computer-program instructions for transcoding the Web content further comprise instructions for:
for at least one interface page:
(a) extracting text from a layer of the multiple component layers; and (b) encoding the text into the interaction model metadata for delivery to a client, the text not being rendered into a bitmap representing the layer.
- 32. A head-end server in a television entertainment infrastructure, the head-end server comprising:
downloading means for downloading Web content comprising a plurality of interface pages, each interface page comprising a plurality of layers, each layer being a particular one of multiple layer types; and transcoding means for encoding the interface pages into a program, the program comprising a plurality of video components and metadata, the encoding being performed such that a layer of the layers that is substantially similar across multiple ones of the interface pages is represented in the video components with a single still corresponding to a first page of the multiple ones, the metadata referencing the single still such that the single still is reused by a client in the television entertainment infrastructure to present information corresponding to each other page of the multiple ones that is not the first page.
- 33. A head-end server as recited in claim 32, further comprising broadcasting means for delivering the program in decode or non-decode order to the client.
- 34. A head-end server as recited in claim 32, wherein the transcoding means further comprises:
for at least one page of the interface pages:
(a) extracting means to remove text from a layer of the layers; and (b) transferring means to encode the text into the metadata, the transferring being performed such that the text is not rendered into a video component representing the layer.
- 35. In a television entertainment system comprising a head-end server coupled to one or more clients, a method comprising:
receiving, by a client of the one or more clients, broadcast data from the head-end server, the broadcast data comprising a plurality of video components and interaction model metadata; decoding, based on information in the interaction model metadata, multiple ones of the video components to represent a single still in a program; and presenting the single still to an end user.
- 36. A method as recited in claim 35, wherein the video components are in an MPEG data format.
- 37. A method as recited in claim 35, wherein the client is a set-top box.
- 38. A method as recited in claim 35, wherein the single still is a first still, and wherein the method further comprises:
determining from the metadata that a particular video component of the multiple ones is not directly rendered into the broadcast data corresponding to the first still; and responsive to determining, retrieving the particular video component from broadcast data directly rendered for a second still, a location of the particular video component in the broadcast data being referenced by interaction model metadata corresponding to the first still.
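Claim 38 recites a client discovering, from metadata, that a component of the current still was not carried with that still and retrieving it from the broadcast data rendered for another still. The hypothetical sketch below illustrates that reference resolution; the dictionary layout and names are assumptions made for illustration.

```python
# Illustrative sketch only: when metadata marks a component as shared rather than
# carried with the current still, the client fetches it from the broadcast data of
# the still it was originally rendered for.
from typing import Dict

def resolve_component(component_id: str, still_id: str,
                      broadcast: Dict[str, Dict[str, bytes]],
                      metadata: Dict[str, Dict[str, str]]) -> bytes:
    """Return the component bytes for a still, following a shared-component
    reference in the still's metadata when the bytes are stored elsewhere."""
    local = broadcast.get(still_id, {})
    if component_id in local:
        return local[component_id]
    # Not rendered for this still: metadata points at the still that carries it.
    source_still = metadata[still_id][component_id]
    return broadcast[source_still][component_id]

if __name__ == "__main__":
    broadcast = {"still-A": {"background": b"\x00BG", "text": b"\x00T1"},
                 "still-B": {"text": b"\x00T2"}}
    metadata = {"still-B": {"background": "still-A"}}
    # still-B reuses still-A's background without carrying a second copy.
    print(resolve_component("background", "still-B", broadcast, metadata))
```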
- 39. A method as recited in claim 36, wherein individual video components represent a respective layer of a plurality of layer types, the layer types comprising a background layer, an image layer, and a text layer.
- 40. A method as recited in claim 39, wherein decoding multiple ones of the video components to represent a single still further comprises:
determining from the metadata that text for a text layer is in the interaction model metadata; extracting the text from the interaction model metadata; and wherein presenting further comprises rendering the text onto a video component corresponding to the text layer.
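Claims 40 and 48 recite the client extracting text from the interaction model metadata and rendering it over the corresponding video component or into an on-screen-display buffer. The following hypothetical sketch paints metadata-carried text records onto a plain character grid standing in for that buffer; the record format is an illustrative assumption.

```python
# Illustrative sketch only: pull text out of the interaction model metadata and
# draw it over the decoded text-layer component; a character grid stands in for
# an on-screen-display buffer.
from typing import Dict, List

def render_text_overlay(rows: int, cols: int, text_items: List[Dict]) -> List[str]:
    """Paint metadata-carried text records onto a blank character framebuffer."""
    buffer = [[" "] * cols for _ in range(rows)]
    for item in text_items:
        row, col, text = item["row"], item["col"], item["text"]
        for i, ch in enumerate(text):
            if 0 <= row < rows and 0 <= col + i < cols:
                buffer[row][col + i] = ch
    return ["".join(line) for line in buffer]

if __name__ == "__main__":
    interaction_metadata = {"text_layer": [{"row": 1, "col": 2, "text": "Weather"}]}
    for line in render_text_overlay(3, 20, interaction_metadata["text_layer"]):
        print(repr(line))
```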
- 41. A computer-readable medium comprising computer-program instructions executable by a processor to perform a method as recited in claim 35.
- 42. A set-top box comprising a processor coupled to a computer-readable medium comprising computer-program instructions executable by the processor to perform a method as recited in claim 35.
- 43. In a television entertainment system, a method for presenting broadcast data, the method comprising:
receiving broadcast data comprising a plurality of video components and a metadata component; and decoding multiple ones of the video components according to information in the metadata to present a single image on a display, a particular video component of the multiple ones being shared between the single image and a different image such that only one instance of the particular video component is contained in the broadcast data.
- 44. A method as recited in claim 43, wherein metadata associated with the single image includes a reference to the particular video component, the particular video component having been transcoded for the different image, the particular video component not having been transcoded for the single image.
- 45. A method as recited in claim 43, wherein the video components are received over an in-band communication channel, and wherein the metadata is received over an out-of-band channel.
- 46. A method as recited in claim 43, wherein individual ones of the multiple ones of the video components respectively represent a background layer, an image layer, and a text layer.
- 47. A method as recited in claim 43, wherein text corresponding to the single image is encoded into the metadata, and wherein the multiple ones of the video components respectively represent a background layer and an image layer, the background layer being represented as an intra picture, the image layer being represented as a predicted picture based on the intra picture.
- 48. A method as recited in claim 43, wherein the method further comprises extracting text from the metadata to render the text into an on-screen-display buffer corresponding to the single image presented on the display.
- 49. A method as recited in claim 43, wherein the method further comprises:
decoding a portion of the metadata to render a hot-spot layer over the single image; and wherein the hot-spot layer provides an interaction model for a user to interface with displayed components of the single image.
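Claim 49 recites decoding a portion of the metadata into a hot-spot layer that provides the interaction model over the single image. The sketch below (illustrative only) shows hit testing of a screen position against hot-spot rectangles; the rectangle and action format is a hypothetical stand-in.

```python
# Illustrative sketch only: a hot-spot layer decoded from metadata maps on-screen
# regions to actions, giving the viewer an interaction model over the still image.
from typing import Dict, List, Optional

def hit_test(x: int, y: int, hot_spots: List[Dict]) -> Optional[str]:
    """Return the action of the first hot-spot rectangle containing (x, y)."""
    for spot in hot_spots:
        if spot["x"] <= x < spot["x"] + spot["w"] and spot["y"] <= y < spot["y"] + spot["h"]:
            return spot["action"]
    return None

if __name__ == "__main__":
    hot_spots = [
        {"x": 0,   "y": 400, "w": 200, "h": 60, "action": "open:news"},
        {"x": 220, "y": 400, "w": 200, "h": 60, "action": "open:sports"},
    ]
    print(hit_test(250, 420, hot_spots))  # -> "open:sports"
```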
- 50. A method as recited in claim 43, wherein the information identifies a respective temporal reference for each of the multiple ones of the video components, and wherein decoding further comprises determining a decode order for individual ones of the multiple ones based on the respective temporal reference.
- 51. A computer-readable medium comprising computer-program instructions executable by a processor to perform operations as recited in the method of claim 43.
- 52. A set-top box comprising a processor coupled to a computer-readable medium comprising computer-program instructions executable by the processor to perform operations as recited in the method of claim 43.
RELATED APPLICATIONS
[0001] This patent application is related to the following copending U.S. applications:
[0002] U.S. application Ser. No. 10/154,622, titled “Systems and Methods to Reference Resources in a Television-Based Entertainment System”, filed on May 22, 2002, and hereby incorporated by reference; and
[0003] U.S. application Ser. No. ______, titled “Systems and Methods for Dynamic Conversion of Web Content to an Interactive Walled Garden Program”, filed on ______, and hereby incorporated by reference.