The application relates generally to multiview presentations on high definition (HD)/ultra high definition (UHD) video displays.
HD and UHD displays such as 4K and 8K displays (and higher resolutions envisioned) offer large display “real estate” of remarkable resolution.
Accordingly, a device includes at least one computer memory that is not a transitory signal and that in turn includes instructions executable by at least one processor to provide at least a first template defined by at least one extensible markup language (XML) file and/or JavaScript. The first template defines segmented tiles of content that can be displayed simultaneously on a display. The instructions are executable to present in each tile content from a respective source of content. The source of content for each tile is unique to that tile relative to the other tiles, such that multiple sources of content are displayed in a synchronized fashion.
The template may be defined in part based on respective types of content to be presented in the tiles and/or based on end user preferences. Each source of content can be controlled independently of other sources of content.
In examples, the instructions can be executable to process each source of content as an object within a video canvas of the display. If desired, sizes of the tiles may be defined at least in part by the respective type of the respective sources of content for the tiles. In some embodiments, the instructions are executable to dynamically resize at least one tile.
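By way of non-limiting illustration, such a template might be sketched in JavaScript as follows; the object shape, source identifiers, and coordinates are all hypothetical, chosen only to show how each tile can be processed as an independently controllable object within the video canvas and dynamically resized:

```javascript
// Hypothetical template: each tile is an independent object on the video canvas.
const sportsTemplate = {
  name: "Sports",
  tiles: [
    { id: "main",  source: "hdmi:disc-player",          type: "video",   rect: { x: 0,    y: 0,   w: 1280, h: 1080 } },
    { id: "score", source: "https://example.com/scores", type: "webpage", rect: { x: 1280, y: 0,   w: 640,  h: 540 } },
    { id: "stats", source: "ip:stats-feed",              type: "graphic", rect: { x: 1280, y: 540, w: 640,  h: 540 } },
  ],
};

// Each tile draws from its own source of content, independent of the others.
function sourcesFor(template) {
  return template.tiles.map((t) => t.source);
}

// Dynamically resize one tile without touching the others.
function resizeTile(template, id, rect) {
  const tile = template.tiles.find((t) => t.id === id);
  if (tile) tile.rect = { ...tile.rect, ...rect };
  return tile;
}
```

In this sketch, tile sizes could be seeded from the content type of each source, consistent with the examples above.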
In another aspect, a method includes wrapping templated content streams coming from an application in a hypertext markup language (HTML) and JavaScript web application that accesses the various content streams from respective physical media. The application organizes the content streams into respective tiles for simultaneous presentation of the content streams. The method includes presenting the tiles simultaneously on a display.
In another aspect, an apparatus includes a display, a processor, and a computer memory with instructions executable by the processor to present on the display a first window displaying video from a video disk player. The instructions are executable to present on the display a second window, simultaneously with the first window, displaying video from a digital video recorder (DVR), and to present on the display a third window, simultaneously with the first window, displaying video from a portable memory. The instructions are further executable to present on the display a fourth window, simultaneously with the first window, displaying video from a multiple systems operator (MSO).
The details of the present disclosure, both as to its structure and operation, can be best understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates generally to computer ecosystems including aspects of consumer electronics (CE) device based user information in computer ecosystems. A system herein may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including portable televisions (e.g. smart TVs, Internet-enabled TVs), portable computers such as laptops and tablet computers, and other mobile devices including smart phones and additional examples discussed below. These client devices may operate with a variety of operating environments. For example, some of the client computers may employ, as examples, operating systems from Microsoft, or a Unix operating system, or operating systems produced by Apple Computer or Google. These operating environments may be used to execute one or more browsing programs, such as a browser made by Microsoft or Google or Mozilla or other browser program that can access web applications hosted by the Internet servers discussed below.
Servers may include one or more processors executing instructions that configure the servers to receive and transmit data over a network such as the Internet. Or, a client and server can be connected over a local intranet or a virtual private network. A server or controller may be instantiated by a game console such as a Sony Playstation®, a personal computer, etc.
Information may be exchanged over a network between the clients and servers. To this end and for security, servers and/or clients can include firewalls, load balancers, temporary storage, proxies, and other network infrastructure for reliability and security. One or more servers may form an apparatus that implements methods of providing a secure community, such as an online social website, to network members.
As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware and include any type of programmed step undertaken by components of the system.
A processor may be any conventional general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines, as well as registers and shift registers.
Software modules described by way of the flow charts and user interfaces herein can include various sub-routines, procedures, etc. Without limiting the disclosure, logic stated to be executed by a particular module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
Present principles described herein can be implemented as hardware, software, firmware, or combinations thereof; hence, illustrative components, blocks, modules, circuits, and steps are set forth in terms of their functionality.
Further to what has been alluded to above, logical blocks, modules, and circuits described below can be implemented or performed with a general purpose processor, a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device, an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be implemented by a controller or state machine or a combination of computing devices.
The functions and methods described below, when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc. A connection may establish a computer-readable medium. Such connections can include, as examples, hard-wired cables including fiber optics and coaxial wires and digital subscriber line (DSL) and twisted pair wires.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
Now specifically referring to
Accordingly, to undertake such principles the AVDD 12 can be established by some or all of the components shown in
In addition to the foregoing, the AVDD 12 may also include one or more input ports 26 such as, e.g., a USB port to physically connect (e.g. using a wired connection) to another CE device and/or a headphone port to connect headphones to the AVDD 12 for presentation of audio from the AVDD 12 to a consumer through the headphones. The AVDD 12 may further include one or more computer memories 28 that are not transitory signals, such as disk-based or solid state storage (including but not limited to flash memory). Also in some embodiments, the AVDD 12 can include a position or location receiver such as but not limited to a cellphone receiver, GPS receiver and/or altimeter 30 that is configured to e.g. receive geographic position information from at least one satellite or cellphone tower and provide the information to the processor 24 and/or determine an altitude at which the AVDD 12 is disposed in conjunction with the processor 24. However, it is to be understood that another suitable position receiver other than a cellphone receiver, GPS receiver and/or altimeter may be used in accordance with present principles to e.g. determine the location of the AVDD 12 in e.g. all three dimensions.
Continuing the description of the AVDD 12, in some embodiments the AVDD 12 may include one or more cameras 32 that may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or a camera integrated into the AVDD 12 and controllable by the processor 24 to gather pictures/images and/or video in accordance with present principles. Also included on the AVDD 12 may be a Bluetooth transceiver 34 and other Near Field Communication (NFC) element 36 for communication with other devices using Bluetooth and/or NFC technology, respectively. An example NFC element can be a radio frequency identification (RFID) element.
Further still, the AVDD 12 may include one or more auxiliary sensors 37 (e.g., a motion sensor such as an accelerometer, gyroscope, cyclometer, or a magnetic sensor, an infrared (IR) sensor, an optical sensor, a speed and/or cadence sensor, a gesture sensor (e.g. for sensing gesture commands), etc.) providing input to the processor 24. The AVDD 12 may include still other sensors such as e.g. one or more climate sensors 38 (e.g. barometers, humidity sensors, wind sensors, light sensors, temperature sensors, etc.) and/or one or more biometric sensors 40 providing input to the processor 24. In addition to the foregoing, it is noted that the AVDD 12 may also include an infrared (IR) transmitter and/or IR receiver and/or IR transceiver 42 such as an IR data association (IRDA) device. A battery (not shown) may be provided for powering the AVDD 12.
Still referring to
In the example shown, to illustrate present principles all three devices 12, 44, 46 are assumed to be members of an entertainment network in, e.g., a home, or at least to be present in proximity to each other in a location such as a house. However, for illustrating present principles the first CE device 44 is assumed to be in the same room as the AVDD 12, bounded by walls illustrated by dashed lines 48.
The example non-limiting first CE device 44 may be established by any one of the above-mentioned devices, for example, a portable wireless laptop computer or notebook computer, and accordingly may have one or more of the components described below. The second CE device 46 without limitation may be established by a wireless telephone.
The first CE device 44 may include one or more displays 50 that may be touch-enabled for receiving consumer input signals via touches on the display. The first CE device 44 may include one or more speakers 52 for outputting audio in accordance with present principles, and at least one additional input device 54 such as e.g. an audio receiver/microphone for e.g. entering audible commands to the first CE device 44 to control the device 44. The example first CE device 44 may also include one or more network interfaces 56 for communication over the network 22 under control of one or more CE device processors 58. Thus, the interface 56 may be, without limitation, a Wi-Fi transceiver, which is an example of a wireless computer network interface. It is to be understood that the processor 58 controls the first CE device 44 to undertake present principles, including the other elements of the first CE device 44 described herein such as e.g. controlling the display 50 to present images thereon and receiving input therefrom. Furthermore, note the network interface 56 may be, e.g., a wired or wireless modem or router, or other appropriate interface such as, e.g., a wireless telephony transceiver, or Wi-Fi transceiver as mentioned above, etc.
In addition to the foregoing, the first CE device 44 may also include one or more input ports 60 such as, e.g., a USB port to physically connect (e.g. using a wired connection) to another CE device and/or a headphone port to connect headphones to the first CE device 44 for presentation of audio from the first CE device 44 to a consumer through the headphones. The first CE device 44 may further include one or more computer memories 62 such as disk-based or solid state storage. Also in some embodiments, the first CE device 44 can include a position or location receiver such as but not limited to a cellphone and/or GPS receiver and/or altimeter 64 that is configured to e.g. receive geographic position information from at least one satellite and/or cell tower, using triangulation, and provide the information to the CE device processor 58 and/or determine an altitude at which the first CE device 44 is disposed in conjunction with the CE device processor 58. However, it is to be understood that another suitable position receiver other than a cellphone and/or GPS receiver and/or altimeter may be used in accordance with present principles to e.g. determine the location of the first CE device 44 in e.g. all three dimensions.
Continuing the description of the first CE device 44, in some embodiments the first CE device 44 may include one or more cameras 66 that may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or a camera integrated into the first CE device 44 and controllable by the CE device processor 58 to gather pictures/images and/or video in accordance with present principles. Also included on the first CE device 44 may be a Bluetooth transceiver 68 and other Near Field Communication (NFC) element 70 for communication with other devices using Bluetooth and/or NFC technology, respectively. An example NFC element can be a radio frequency identification (RFID) element.
Further still, the first CE device 44 may include one or more auxiliary sensors 72 (e.g., a motion sensor such as an accelerometer, gyroscope, cyclometer, or a magnetic sensor, an infrared (IR) sensor, an optical sensor, a speed and/or cadence sensor, a gesture sensor (e.g. for sensing gesture commands), etc.) providing input to the CE device processor 58. The first CE device 44 may include still other sensors such as e.g. one or more climate sensors 74 (e.g. barometers, humidity sensors, wind sensors, light sensors, temperature sensors, etc.) and/or one or more biometric sensors 76 providing input to the CE device processor 58. In addition to the foregoing, it is noted that in some embodiments the first CE device 44 may also include an infrared (IR) transmitter and/or IR receiver and/or IR transceiver 78 such as an IR data association (IRDA) device. A battery (not shown) may be provided for powering the first CE device 44.
The second CE device 46 may include some or all of the components shown for the CE device 44.
Now in reference to the afore-mentioned at least one server 80, it includes at least one server processor 82, at least one computer memory 84 such as disk-based or solid state storage, and at least one network interface 86 that, under control of the server processor 82, allows for communication with the other devices of
Accordingly, in some embodiments the server 80 may be an Internet server, and may include and perform “cloud” functions such that the devices of the system 10 may access a “cloud” environment via the server 80 in example embodiments. Or, the server 80 may be implemented by a game console or other computer in the same room as the other devices shown in
The control devices 108, 110 may be, without limitation, portable computers such as tablet computers or laptop computers (also including notebook computers) or other devices with one or more of the CE device 44 components shown in
The following description inherits the principles and components of the preceding discussion.
Now referring to
As will be explained in greater detail below, the Multiview techniques herein allow the consumer (also referred to as "customer" or "viewer" or "user") to control the "real estate" on the larger screen, high resolution display. Unlike UIs that have tiles as part of a UI menu system controlled mostly by the TV or operating system, embodiments herein enable a "video wall" such that the applications or video tiles or widgets or services are displayed and shuttled around the real estate as the consumer wishes, or automatically organized based on sorting algorithms.
The digital signage aspect of Multiview is one in which the retailer or advertiser can deliver "objects" to the screen, and each object can be independently controlled. Object-based video, independently running applications, notification popup bars, and scrolling marquees are all ways to deliver an impact to the consumer walking by. Accordingly, flexibility is provided among these screen objects to self-adjust as new information is presented. They can also be selected for expansion to the entire screen should a consumer want a full display.
Control can come from the consumer, the broadcaster/programmer, or the advertiser. An advertising server may be running in the background feeding the display, and as new ads present themselves or new merchandise is displayed, the real estate dynamically adjusts.
Templates, described further below, are one way of having a fixed organization of tiles made up of applications or objects. Each tile can be an object that can be independently controlled and programmed. In some embodiments the template view has divisions that can be independently controlled, and is created as a template for the purpose of maximal use of the screen real estate. Templated views can be themed, such as Sports, Cooking, or Movie templates, that allow for automatic content display based on a histogram of the consumer's viewing selections over time.
Multiview can be made up of individual IP video feeds or just one monolithic IP feed with each decimated video aggregated into one template. In example implementations, the template knows how the videos and objects and applications (tiles) making up the entire template have been arranged so that the user can signal the broadcaster what video to remove or add in an interactive IP video session. In a televised broadcast template, the national feeds can be selected and set based on the supporting templates, whereas in an IP video streaming session the tiles that make up a single IP feed can be controlled by the consumer and broadcaster to satisfy the targeted viewing preferences of a single household or viewer.
Display real estate thus may be segmented into tiles or objects, each of which is assigned metadata of the full view. Upon selecting options for each tile, a greater range of metadata and options becomes available for each tile. Metadata options can be in the form of selectable applications, selectable views, or selectable carousels of content as discussed further below. Each tile may be individually managed and controlled and easily reset with updated content in a variety of ways.
As focus is placed on a particular tile by highlighting it or surrounding it with a lighted bar, metadata searches can also be delivered into other tiles for the purpose of linking. Linking allows a consumer to highlight a video and then have the supporting metadata displayed in a tile next to the video. Tile linking is a way to search for further information about content delivered in one tile, and then display it in another, adjacent tile for the purpose of managing the real estate and allowing for continued live updates from the video to be displayed in the adjacent tile during the viewing. This type of linking allows for one type of object to be separated from, but linked to, another type of object, e.g., a video linked to a concurrent metadata display. Automatic linking can occur when an operator delivers a single template and each tile has a relationship to the others for the purpose of curating the entire video experience.
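By way of non-limiting illustration, the tile-linking behavior described above might be sketched as follows; the link table, lookup function, and tile identifiers are hypothetical helpers, not an actual implementation:

```javascript
// Hypothetical link table: when a tile takes focus, its supporting metadata
// is routed to an adjacent tile for concurrent display.
const links = new Map(); // focusedTileId -> adjacentTileId

function linkTiles(videoTileId, metadataTileId) {
  links.set(videoTileId, metadataTileId);
}

// Called when focus (e.g., a lighted bar) lands on a tile.
function onFocus(tileId, lookupMetadata, renderInTile) {
  const target = links.get(tileId);
  if (!target) return false;
  // live updates continue to flow into the linked adjacent tile
  renderInTile(target, lookupMetadata(tileId));
  return true;
}
```

Automatic linking as described above would amount to the operator pre-populating such a table when a single template is delivered.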
In the example shown, the tiles 302 in the top row of the screen shot of
Some of the tiles 302 are established by a consumer designating the underlying asset as a “favorite”, and hence the screen shot of tiles in
Thus, as represented in
As indicated in
In an example, the state of a video asset is automatically recorded when viewing of the video is discontinued. The information recorded may include the identification of the device 300, such as a network address thereof, as well as the location in the video stream that was last presented on the device 300. Various techniques can be used to know the last-viewed location, including, for example, knowing the location in a video bitstream at which viewing of the video was discontinued using Remote Viewing (RVU) technology. Or, the length of time the video was viewed may be recorded. Yet again, automatic content recognition (ACR) can be used on a snapshot of the last-viewed frame and used as an entering argument to a database of video to identify the location in the video at which viewing was discontinued.
Thus, favorite assets are managed in a way that allows the consumer to scroll through a stream of tiles to see what the favorites are in real time as the display is viewed. Much like a carousel, the strip of tiles allows the viewer to retain the "state" of the individual content source within each tile so that, upon subsequent re-selection, it returns exactly to where the consumer left off. Instead of showing the content from the beginning, or an updated webpage, the strip or carousel takes the last snapshot of the asset state and displays it for reference.
In an example, the display executes a hypertext markup language (HTML)-5 application to store the content state of each tiled asset as an extensible markup language (XML) file that has the commands and state information ready to be accessed.
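By way of non-limiting illustration, such per-tile state might be serialized as in the following JavaScript sketch, which emits an XML fragment of the kind described; the element and attribute names are illustrative assumptions, not an actual schema:

```javascript
// Hypothetical per-tile viewing state: device identity plus the point in the
// stream at which viewing was discontinued, serialized to XML for later access.
function tileStateToXml(state) {
  const esc = (s) =>
    String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
  return (
    `<tileState asset="${esc(state.assetId)}">` +
    `<device>${esc(state.deviceAddress)}</device>` +
    `<position seconds="${state.positionSeconds}"/>` +
    `</tileState>`
  );
}
```

An HTML5 application could store one such fragment per tiled asset, ready to be re-read when the asset is re-selected on the same or another device.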
Remembering the exact state of content being viewed and the ability to go right to that point of viewing enables a consumer to move from device to device and maintain the exact viewing configuration that was in effect when one device was abandoned and another one accessed.
Turning now to
At block 600, the device 700 of
Moving to block 602, the device 700 automatically determines additional potential favorites based on consumer behavior in operating the device 700. For example, any of the techniques mentioned previously may be used. Also, as another example, if the consumer watches sports channels for an unusually long period of time, the identifications of other sports channels or video sources may be automatically added to the favorites data structure.
In addition, at block 604 additional identifications of other potential favorite video assets may be added to the favorites data structure based on consumer-input preferences. For example, a user interface (UI) can be presented on the device 700 prompting the consumer to enter preferred actors, or video genres, etc. and those preferences are then used as entering arguments to a database of video metadata to retrieve the identifications of video assets most closely satisfying the consumer-input preferences.
The tiles 702 in
However, if desired the decimation of the video may be executed by the device 700 on undecimated full video received from the sources of the favorites. When decimated by the broadcaster or other source, the device 700 can transmit a message to the broadcaster or other source that the favorites UI of
At block 608, the consumer may, with the aid of a point and click device or other input means (e.g., touch screen input), drag and drop or otherwise move the tiles 702 in the presentation of the UI of
Accordingly, each tile 702 in the mosaic of tiles of
Diverse content sources are thus aggregated into a single view on a display device, and the UI may be at least partially auto-populated with these selected content sources. Content selected from various Internet sources, cable sources, and HDMI sources can then be aggregated into a customized display.
Templates such as the example shown in
As further contemplated herein, the device 700 may be employed according to above principles in hospitality establishments or bars or for digital signage as a way to deliver product videos that show how the product is being used or fashion video demonstrations of clothing, etc. Moreover, medical uses with different camera angles of an operation populating the tiles 702 are envisioned. The videos can be related to each other.
Turning now to
A consumer selection of a desired layout is received from the user device at block 802. The consumer selection is sent in the form of metadata to the service provider, including the name or network address of a desired channel, web feed, etc. A consumer need only click on a selection as described further below, and the AVDD automatically extracts the relevant metadata from the selected asset and sends it to the service provider.
At block 804 the service provider populates the template with content types indicated by the template. Thus, each tile of the template is associated with an underlying content of the template type. Each tile may be visually represented by a still or video image selected from the underlying asset. The consumer may also indicate specific content sources, e.g., specific sports TV channels or web feeds for a “sports” template, and those selected sources are used to populate the template.
Moving to block 806, the service provider sends the populated template to the user device as a single file or feed. The consumer may employ a point and click device such as a TV remote control to select a tile on the template, which is received at block 808. At block 810, the underlying asset represented by one or more respective tiles may be changed such that the tile is associated with a first asset at a first time and a second asset at a second time.
Thus, when the tiles are implemented by respective video feeds (which may be decimated by the service provider or the receiving AVDD as described previously), the consumer can watch multiple video events simultaneously as a single feed provided by the broadcaster or other service provider. The consumer selects optional display templates offered by the service provider for automatic content arrangement and display in an organized curated manner. The content is delivered by the broadcaster or other service provider and fills in the template that is chosen by the consumer. For a Multiview TV experience, the feeds can be Internet Protocol (IP) feeds and can be selected by the content distributor or the end customer as mentioned above. Each template identifies the type of content that is delivered into each portion or tile.
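By way of non-limiting illustration, the population step of this flow might be sketched as follows; the template shape, catalog, and matching rule are hypothetical stand-ins for the service provider's actual metadata exchange:

```javascript
// Hypothetical service-provider step: populate each tile of a chosen
// template with an underlying asset of the tile's declared content type,
// for delivery to the AVDD as a single file or feed.
function populateTemplate(template, catalog) {
  return {
    name: template.name,
    tiles: template.tiles.map((tile) => ({
      ...tile,
      // the binding may change over time: a first asset at a first time,
      // a second asset at a second time
      asset: catalog.find((a) => a.type === tile.contentType) ?? null,
    })),
  };
}

const sports = {
  name: "Sports",
  tiles: [
    { id: "game",   contentType: "video" },
    { id: "scores", contentType: "webpage" },
  ],
};
const catalog = [
  { id: "espn-feed",   type: "video" },
  { id: "scores-page", type: "webpage" },
];
const populated = populateTemplate(sports, catalog);
```

Consumer-indicated specific sources could be honored by filtering the catalog before this matching step.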
For example, the service provider can create a customized sports view 900 as shown in
As shown in
Each channel that populates a template for viewing in multicast format can be associated with an indicator in the program guide sent as metadata to the AVDD. For instance, as shown in
The effective use of larger screen AVDDs with improved display resolution allows for splitting the canvas into multiple parts that can be delivered as a single video feed or HTML5 application. Each broadcaster or content source or service provider can leverage this system for delivering a package of content not just one video or one guide or one web site.
It may now be appreciated that in the example of
Using Web standards and HTML5 applications, the service provider can deliver these custom templates, as long as the hardware platform supports the multiple decoding requirements of the video. The purpose of this application or template is to signal to the content source or service provider how the consumer wants the video to be delivered.
Attention is now directed to
Commencing at block 1300 in
As the consumer scrolls through the tiles, one of the tiles moves into focus at block 1302, typically into the central portion of the tiled view. As a tile takes focus as shown at 1400 in
As shown in, e.g.,
In
The principles of
The Z-plane concept can also be used behind a tile that is in the X-Y plane. This concept is similar to the carousel that can be scrolled on the canvas to identify content. In this particular implementation the Z-plane tile exists behind the tile being highlighted or visible within a template. Though it cannot be seen, in-place scrolling can be executed in which the next Z-plane choice comes to the front and replaces the tile currently visible. This "in place" tile scrolling is particularly efficient if the viewer is familiar with what is in the carousel associated with a particular tile or content source. It is fast in that the template is not swapped out for a carousel view in the canvas, nor minimized or set back during search, but remains fully in place. The viewer in tile scroll mode simply clicks on that tile repeatedly to have the contents replaced with similarly themed content assigned to that tile. In effect there would be a virtual carousel assigned to each tile in the Z-plane behind the tile.
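By way of non-limiting illustration, such a virtual per-tile carousel might be sketched as follows; the class and its content identifiers are hypothetical:

```javascript
// Virtual carousel behind a tile in the Z-plane: each click replaces the
// visible content with the next similarly themed choice, without swapping
// out or minimizing the template.
class ZPlaneCarousel {
  constructor(choices) {
    this.choices = choices; // themed content assigned to this tile
    this.index = 0;
  }
  get visible() {
    return this.choices[this.index];
  }
  advance() {
    // one click: the next Z-plane choice comes to the front
    this.index = (this.index + 1) % this.choices.length;
    return this.visible;
  }
}
```

Repeated clicks cycle through the carousel in place, wrapping around to the first choice after the last.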
Each section or tile that represents a discrete piece of content can be managed using an HTML5 canvas element that is controllable with specific WebGL-based APIs. The AVDD manufacturer can also build a custom application for menuing that runs faster and utilizes OpenGL ES or other graphics acceleration techniques to deliver a fast scrolling feed and to send individual commands to each tile for control. When each tile is highlighted or made active, a series of commands can be executed using XML or JavaScript to program the tile for the desired function.
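By way of non-limiting illustration, such a per-tile command series might be dispatched as in the following sketch; the command names and handler shapes are illustrative assumptions, not actual HTML5 canvas or WebGL calls:

```javascript
// Hypothetical dispatcher: when a tile is made active, a series of commands
// (which could equally be expressed in XML) programs it for the desired function.
function programTile(tile, commands) {
  const handlers = {
    setSource: (t, arg) => { t.source = arg; },
    mute:      (t)      => { t.muted = true; },
    resize:    (t, arg) => { t.rect = arg; },
  };
  for (const { cmd, arg } of commands) {
    const handler = handlers[cmd];
    if (handler) handler(tile, arg);
  }
  return tile;
}

const newsTile = { id: "news", source: null, muted: false };
programTile(newsTile, [
  { cmd: "setSource", arg: "ip:news-feed" },
  { cmd: "mute" },
]);
```

An XML form of the same command series could be parsed into the same `{ cmd, arg }` records before dispatch.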
The embodiment of
Moving to block 1902, the content type for each tile to be presented on the display in a tiled view is received. The selection of content type may be by the consumer associated with the AVDD according to principles described elsewhere herein.
At block 1904, a minimum size configuration (also referred to as an "aspect ratio") for each tile is defined or established based on the display type and the type of content selected for that tile. This may be done by using a lookup table constructed by the manufacturer of the AVDD or the service provider that ensures that, depending on the type of asset underlying the tile, the tile will be large enough to promote easy discernment by the average (or in some cases visually impaired) viewer.
For example, the table below illustrates:
Proceeding to block 1906, the tiles may be populated by associating them with specific underlying assets of the content type defined for the tile. Moving to block 1908, in some implementations the consumer may be given the option of resizing one or more tiles by, e.g., dragging and dropping a corner or edge of a tile outward or inward to expand or contract the tile size, with the other tiles being automatically expanded or contracted accordingly. Or, if a touch screen display is used, pinches in or out on a tile can be used to contract or expand the tile.
Block 1910 indicates that as tiles are automatically resized, the minimum ARs for each tile (which, recall, are based on the type of display and type of content assigned to the tile) act as limits in resizing. Thus, for instance, no tile may be resized to less than its minimum AR. This limit can also apply to consumer-input tile contractions, in which a tile cannot be contracted to be smaller than its minimum AR regardless of consumer input attempting to do so. If a tile cannot be automatically resized to both conform to its minimum AR and remain on the display canvas, it can be removed, as a default.
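By way of non-limiting illustration, the limits of blocks 1904-1910 might be sketched as follows; the minimum-size values are illustrative placeholders, not the actual manufacturer or service-provider lookup table:

```javascript
// Hypothetical minimum-size lookup per content type (placeholder values).
const MIN_SIZE = {
  video:   { w: 640, h: 360 },
  webpage: { w: 480, h: 480 },
  graphic: { w: 320, h: 240 },
};

// Clamp a requested resize to the tile's minimum, and report removal if the
// minimum cannot fit on the display canvas (the default described above).
function resizeWithLimits(tile, requested, canvas) {
  const min = MIN_SIZE[tile.contentType] ?? { w: 0, h: 0 };
  const w = Math.max(requested.w, min.w);
  const h = Math.max(requested.h, min.h);
  if (w > canvas.w || h > canvas.h) return { action: "remove" };
  return { action: "resize", rect: { w, h } };
}
```

The same clamp would apply whether the contraction comes from automatic re-layout or from consumer input such as a drag or a pinch.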
Also, the consumer may be allowed at block 1912 to designate which tile is to be used as a “staging area” to receive newly selected content for presentation in a Multiview. Proceeding to block 1914, consumer-selected content is moved from the staging area to a new tile per the tile's pre-defined content type (if it matches the type of selected content) or per other consumer indication if desired.
Note that content types may include not only TV video and other video but also calendars, web spaces, etc., and that this comment applies to all embodiments herein unless otherwise indicated. Thus, in the example shown in
Thus, the size or aspect ratio of each tile described above can be set by the consumer and fixed in place, or it can be dynamically adjusted or resized to fit additional content or information being sent to the AVDD display. The AVDD adjusts to place the sections of content together and utilize the available space, based on template parameters set by the AVDD manufacturer, service provider, or consumer. Fixed templates or designed templates comprise various tiles or sections and can be mapped to different types of content sources, such as videos, webpages, photos, and graphical data streams, to make up a custom view. These templates or custom views can be saved or stored and also assigned permanent content sources that send the latest feed or information to each tile persistently. This creates unique views that can be preserved and stored for future retrieval. Saved themes or templates, with their corresponding metadata, can be shared by the consumer and sent to other networked AVDDs for viewing by friends or family.
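Saving a template with its metadata for later retrieval or sharing to another networked AVDD could be as simple as serializing the tile layout and its assigned persistent sources. The object shape and field names below are invented for the sketch:

```javascript
// Hypothetical serialization of a saved theme/template: each tile
// records its content type, its persistently assigned source, and its
// position on the canvas, so the view can be restored or shared.
function saveTemplate(name, tiles) {
  return JSON.stringify({
    name,
    savedAt: "2022-01-01T00:00:00Z", // fixed timestamp for the sketch
    tiles: tiles.map(t => ({
      id: t.id,
      contentType: t.contentType,
      source: t.source, // persistent feed assigned to the tile
      rect: t.rect,
    })),
  });
}

const saved = saveTemplate("morning", [
  { id: "t1", contentType: "video", source: "hdmi1",
    rect: { x: 0, y: 0, w: 1920, h: 1080 } },
]);
const restored = JSON.parse(saved); // what a receiving AVDD would load
```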
Because the tiles are constrained by the resolution of the AVDD and the size of the AVDD display, there is a limit to how much content can reasonably be displayed at once on the AVDD screen. As described above, an algorithm incorporating visual limits for each type of content specifies minimum aspect ratios for video, audio placards, web pages, pictures, and graphical representations to serve as reference points for dynamic display adjustment when new content is added. This prevents sections from being too small to read or too small for acceptable video quality. Also, fixed templates have already been assigned aspect ratios and sizes for the individual tiles comprising the template. Templated tiles can also be given priority such that one tile receives new content first, as a staging area. This prioritization of tiles enables tablets or phones to fling or send content to the TV screen, which can then target an individual tile by order of priority. A particular tile then becomes preset for that particular type of content.
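The priority-ordered targeting of flung content can be sketched as follows: content sent from a phone or tablet lands in the highest-priority tile whose declared content type matches. The `priority` and `contentType` fields are illustrative:

```javascript
// Sketch of "fling" targeting: pick the matching tile with the lowest
// priority number (highest priority), or null if no tile accepts the
// incoming content type.
function targetTile(tiles, contentType) {
  return tiles
    .filter(t => t.contentType === contentType)
    .sort((a, b) => a.priority - b.priority)[0] ?? null;
}

const tiles = [
  { id: "a", contentType: "video", priority: 2 },
  { id: "b", contentType: "video", priority: 1 }, // staging-area tile
  { id: "c", contentType: "webpage", priority: 1 },
];
const hit = targetTile(tiles, "video"); // lands in the staging tile "b"
```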
As understood herein, large HD and UHD displays can function akin to a billboard of daily activity that can track many aspects of home life or activity. And that means various tiles or sections will need to be dynamically updated and resized as the display's information changes over the course of a day. Also, different times of the day will allow for different themed templates which can be preconfigured to pull content or be sent content.
HTML5 web applications that utilize WebGL, JavaScript, XML, MSE (Media Source Extensions), and EME (Encrypted Media Extensions) are a flexible way to implement present principles.
An AVDD 2100 in
As shown in
The advertisements can display graphics, video, and data in a concise format and can be either superimposed on the video in the program window 2106, or the video in the program window 2106 may be slightly decimated and the popup bar 2104 placed in the resulting empty display space, since the size of the popup bar 2104 is known to both the broadcaster and other affected parties. The AVDD 2100 can automatically mute the audio of the program in the window 2106 and then play back the advertisement automatically. Each advertisement may be of a standard configuration so that broadcasters know how much space to allocate for each form of content: video, graphics, and text. In any case, the popup bar 2104 is at least partially controlled or activated by the broadcaster, thus allowing the program provider to decide advertisement breaks or presentations.
If a premium level of service is being provided to the consumer via the AVDD 2100 at diamond 2400, received broadcast content may be presented, undecimated, on the entire canvas of the AVDD 2100. Presentation of the popup bar 2104 is blocked, such that uninterrupted viewing of an advertising-free broadcast content is afforded in the premium level of service.
On the other hand, if a standard level of service is being provided at diamond 2404, the above-described advertisements may be presented in the popup bar 2104 at block 2406 simultaneously with presenting the broadcast content in the window 2106, again without interrupting the broadcast program by the nuisance of advertisements embedded in the program. However, if the lowest level of service is provided, conventional programs including embedded advertising that interrupts the program may be provided to the AVDD at block 2408. The level of service used may be established by the consumer by, e.g., making appropriate low, medium, and high payments to the MSO or other content provider.
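The three-tier service-level branch at diamonds 2400 and 2404 can be sketched as a simple mapping from service level to presentation mode. The level names and returned flags are illustrative:

```javascript
// Sketch of the service-level decision: premium gets the full canvas
// with the popup bar blocked; standard gets the popup bar beside the
// program; the lowest tier falls back to embedded advertising.
function presentation(serviceLevel) {
  switch (serviceLevel) {
    case "premium":
      return { fullCanvas: true, popupBar: false, embeddedAds: false };
    case "standard":
      return { fullCanvas: false, popupBar: true, embeddedAds: false };
    default: // lowest level of service
      return { fullCanvas: true, popupBar: false, embeddedAds: true };
  }
}

const premium = presentation("premium");
const standard = presentation("standard");
```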
Essentially, an application residing on physical media allows multiple sources of content to be displayed in a synchronized fashion, along with the playing video, on a single AVDD. The use of templated views allows each video or content source to be independently controlled and metadata to be assigned to each source. The sources are treated as objects within the entire video canvas of the AVDD screen and are assigned to a single tile or segment of the screen by use of the templated controls. The template controls determine the aspect ratio and size of the segments that show content, based on the type of content, consumer preferences for size, and how the content fits together on the larger AVDD canvas. Each segment within the larger AVDD canvas can be resized dynamically as new content is added.
The physical media does not have to be running an application that creates the template, as this template normally would reside in the AVDD firmware and pull streams from the physical media. However, the physical media equally could have an HTML5 application designed specifically by the content creators for displaying their curated content.
At block 2602, the templated content streams coming from the application may be wrapped in an HTML5 and Javascript web application that accesses all of the various content streams from the various media and organizes them in the same manner as the AVDD otherwise would. Thus, the functionality can reside on the AVDD or any other playback device or even the physical media 2502, 2504, 2506. The only difference is whether the application is embedded in the AVDD, the external player, or within the physical media application. Regardless of where the template application resides, it is run at block 2604 to present the templated view shown in
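The wrapping step at block 2602 can be sketched as a template application, wherever it resides (AVDD, external player, or the physical media itself), that collects the content streams and assigns each to a tile of the template. The stream and template shapes are illustrative:

```javascript
// Hypothetical sketch of block 2602: pair each incoming content stream
// with a tile from the template; tiles without a matching stream are
// left empty until content arrives.
function wrapStreams(streams, template) {
  return template.tiles.map((tile, i) => ({
    tileId: tile.id,
    stream: streams[i] ?? null, // unmatched tiles stay empty
  }));
}

const layout = wrapStreams(
  [{ url: "disc://feature" }, { url: "disc://extras" }],
  { tiles: [{ id: "main" }, { id: "side" }, { id: "chat" }] }
);
```

Because the function is pure over its inputs, the same logic runs unchanged whether it is embedded in the AVDD, the external player, or the physical-media application, which is exactly the portability point the text makes.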
Content from digital media located on physical media, or on a physical player's hard drive, can thus be played back and combined or integrated with other media for a multiview experience. This allows additional experiences to be built around the primary experience of watching the video, such as web-based curated content that synchronizes with the media, or social experiences designed for the purpose of sharing media experiences.
If desired, user-selected additional content may be received at block 2608 for presentation along with the tiles shown in
Multiview as an application for physical digital media makes multiview experiences portable to other playback devices and displays. It also allows content to be staged for viewing at an event, in a store, in a hotel lobby, or in a stadium for digital signage or hospitality, or allows a consumer to carry the application with them when they travel.
In
While the particular MULTIVIEW AS AN APPLICATION FOR PHYSICAL DIGITAL MEDIA is herein shown and described in detail, it is to be understood that the subject matter which is encompassed by the present invention is limited only by the claims.
Number | Date | Country | |
---|---|---|---|
20220256228 A1 | Aug 2022 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 16292940 | Mar 2019 | US |
Child | 17733449 | US | |
Parent | 15070447 | Mar 2016 | US |
Child | 16292940 | US |