The present application relates generally to presenting audio video (AV) content on audio video output devices, with at least one of the devices configured to present the AV content in a format optimized for observance by a person with a sensory impairment.
It is often easier for the audibly and/or visually impaired to observe audio video (AV) content in a format tailored to their impairment, which makes the AV content more perceptible to them. However, present principles recognize that two or more people may wish to simultaneously view the same content in the same room (e.g. in each other's presence) for a shared viewing experience, but only one person may have a hearing, visual, and/or cognitive impairment while the other may wish to view the AV content in its “normal,” non-impaired format.
Accordingly, in a first aspect, an apparatus includes at least one processor and at least one computer readable storage medium that is not a carrier wave. The computer readable storage medium is accessible to the processor and bears instructions which when executed by the processor cause the processor to receive input representing visual or audible capabilities of a first person and at least in part based on the input, configure at least a first setting on a first audio video output device. The instructions also cause the processor to present a first audio video presentation on the first audio video output device in accordance with the first setting and concurrently with presenting the first audio video presentation on the first audio video output device, present the first audio video presentation on a companion audio video output device located in a common space with the first audio video output device.
Furthermore, in some embodiments the instructions when executed by the processor cause the processor to receive second input representing visual or audible capabilities of a second person and at least in part based on the second input, configure at least a second setting on the companion audio video output device. The instructions thus may also cause the processor to present the first audio video presentation on the companion audio video output device in accordance with the second setting. In some embodiments, the first setting may be an audio setting and/or a visual display setting.
If the first setting is a visual display setting, if desired it may be a first color blind setting while the second setting may be configured for presenting video from the first audio video presentation in a configuration not optimized for the visually impaired. Also in some embodiments, if the first setting is a visual display setting then it may be a first color blind setting while the second setting may be a second color blind setting different from the first color blind setting, where both the first and second color blind settings are thus configured for presentation of video from the first audio video presentation in configurations optimized for different visual capabilities.
Further still, in some embodiments the first setting may be a setting for closed captioning, and/or may be a visual display setting for magnifying images presented on the first audio video output device. Thus, e.g., at least one person included in at least one image presented on the first audio video output device may be magnified. Moreover, as indicated above, the first setting may be an audio setting, and in such instances may pertain to volume output on the first audio video output device, audio pitch, and/or frequency.
In another aspect, a method includes providing audio video (AV) content to at least two AV display devices, where the AV content is configured for presentation on a first AV display device according to a first setting configured to optimize the AV content for observance by a person with a sensory impairment. The method also includes synchronizing presentation of the AV content on the first AV display device and a second AV display device, where presentation of the AV content is synchronized such that at least similar video portions of the AV content are presented on the first and second AV display devices at or around the same time.
In still another aspect, a computer readable storage medium bears instructions which when executed by a processor of a consumer electronics (CE) device configure the processor to execute logic including presenting at least video content on separate display devices concurrently, where the video content is presented on at least a first of the display devices in a first format not optimized for observance by the sensory impaired and the video content is presented on at least a second of the display devices in a second format optimized for observance by the sensory impaired.
In still another aspect, a computer readable storage medium bears instructions which when executed by a processor of a consumer electronics (CE) display device configure the processor to execute logic including providing at least video content on separate display devices concurrently, where the video content is presented on at least a first display device in a first format not optimized for observance by the sensory impaired and sending the video content from the first display device to a second display device for presentation thereon in a second format optimized for observance by the sensory impaired.
The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates generally to consumer electronics (CE) device based user information. With respect to any computer systems discussed herein, a system herein may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including portable televisions (e.g. smart TVs, Internet-enabled TVs), portable computers such as laptops and tablet computers, and other mobile devices including smart phones and additional examples discussed below. These client devices may employ, as non-limiting examples, operating systems from Apple, Google, or Microsoft. A Unix operating system may be used. These operating systems can execute one or more browsers such as a browser made by Microsoft or Google or Mozilla or other browser program that can access web applications hosted by the Internet servers over a network such as the Internet, a local intranet, or a virtual private network.
As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware; hence, illustrative components, blocks, modules, circuits, and steps are set forth in terms of their functionality.
A processor may be any conventional general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. Moreover, any logical blocks, modules, and circuits described herein can be implemented or performed, in addition to a general purpose processor, in or by a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device such as an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be implemented by a controller or state machine or a combination of computing devices.
Any software modules described by way of flow charts and/or user interfaces herein can include various sub-routines, procedures, etc. It is to be understood that logic divulged as being executed by a module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
Logic when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc. A connection may establish a computer-readable medium. Such connections can include, as examples, hard-wired cables including fiber optics and coaxial wires and digital subscriber line (DSL) and twisted pair wires. Such connections may include wireless communication connections including infrared and radio.
In an example, a processor can access information over its input lines from data storage, such as the computer readable storage medium, and/or the processor accesses information wirelessly from an Internet server by activating a wireless transceiver to send and receive data. Data typically is converted from analog signals to digital and then to binary by circuitry between the antenna and the registers of the processor when being received and from binary to digital to analog when being transmitted. The processor then processes the data through its shift registers to output calculated data on output lines, for presentation of the calculated data on the CE device.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
Now referring specifically to
Describing the first CE device 12 with more specificity, it includes a touch-enabled display 20, one or more speakers 22 for outputting audio in accordance with present principles, and at least one additional input device 24 such as, e.g., an audio receiver/microphone for e.g. entering commands to the CE device 12 to control the CE device 12. The CE device 12 also includes a network interface 26 for communication over at least one network 28 such as the Internet, a WAN, a LAN, etc. under control of a processor 30, it being understood that the processor 30 controls the CE device 12 including presentation of AV content configured for the sensory impaired in accordance with present principles. Furthermore, the network interface 26 may be, e.g., a wired or wireless modem or router, or other appropriate interface such as, e.g., a Wi-Fi, Bluetooth, Ethernet or wireless telephony transceiver. In addition, the CE device 12 includes an input port 32 such as, e.g., a USB port, and a tangible computer readable storage medium 34 such as disk-based or solid state storage. In some embodiments, the CE device 12 may also include a GPS receiver 36 that is configured to receive geographic position information from at least one satellite and provide the information to the processor 30, though it is to be understood that another suitable position receiver other than a GPS receiver may be used in accordance with present principles.
Note that the CE device 12 also includes a camera 14 that may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or camera integrated into the CE device 12 and controllable by the processor 30 to gather pictures/images and/or video of viewers/users of the CE device 12. As alluded to above, the CE device 12 may be e.g. a laptop computer, a desktop computer, a tablet computer, a mobile telephone, an Internet-enabled and/or touch-enabled computerized (e.g. “smart”) telephone, a PDA, a video player, a smart watch, a music player, etc.
Continuing the description of
The CE device 16 further includes a tangible computer readable storage medium 50 such as disk-based or solid state storage, as well as a TV tuner 52. In some embodiments, the CE device 16 may also include a GPS receiver (though not shown) similar to the GPS receiver 36 in function and configuration. Note that a camera 56 is also shown and may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or camera integrated into the CE device 16 and controllable by the processor 46 to gather pictures/images and/or video of viewers/users of the CE device 16, among other things.
In addition to the foregoing, the CE device 16 also has a transmitter/receiver 58 for communicating with a remote commander (RC) 60 associated with the CE device 16 and configured to provide input (e.g., commands) to the CE device 16 to control the CE device 16. Accordingly, the RC 60 also has a transmitter/receiver 62 for communicating with the CE device 16 through the transmitter/receiver 58. The RC 60 also includes an input device 64 such as a keypad or touch screen display, as well as a processor 66 for controlling the RC 60 and a tangible computer readable storage medium 68 such as disk-based or solid state storage. Though not shown, in some embodiments the RC 60 may also include a touch-enabled display screen, a camera such as one of the cameras listed above, and a microphone that may all be used for providing commands to the CE device 16 in accordance with present principles. E.g., a user may configure a setting (e.g. at the CE device 16) to present AV content thereon in a format configured for observation by a person with one or more sensory impairments.
Still in reference to
Turning now to
Optionally, at block 82 the logic may configure a second CE device's settings as well. Thus, e.g., should the processor executing the logic of
In any case, after block 82 the logic proceeds to block 84 where the logic receives or otherwise accesses at least one copy, instance, or version of the AV content in accordance with present principles (e.g., a version unaltered for a sensory impairment). Thereafter the logic moves to block 86 where the logic manipulates one or more copies, instances, and/or versions of the AV content to conform to the one or more sensory impaired settings as indicated at block 80. For example, the logic may e.g. daltonize the AV content to make it more perceptible to a person with partial color-blindness. After manipulating at least one of the copies, instances, or versions of the accessed AV content, the logic proceeds to block 88 where the logic receives and/or determines at least one timing parameter to be utilized by the logic to enable and/or configure the CE devices for simultaneous presentation of the AV content (e.g., one version of the AV content that has been daltonized and will be presented on one CE device may be presented simultaneously or near-simultaneously with another version of the same AV content that has not been daltonized and will be presented on another CE device based on the timing parameter to create a shared-viewing experience). Despite the foregoing, note that in some embodiments the same version, copy, or instance of the AV content may be presentable on each respective CE device, e.g. streamed via multicast Internet Protocol (IP) or IEEE 1394 packets, where the CE device itself manipulates the AV content according to a sensory impairment setting. In other words, the foregoing disclosure of two versions of the same AV content being used is meant to be exemplary; the same (e.g. “original” or “normal”) AV content version may instead be provided to multiple CE devices and then optimized thereat for one or more sensory impairments in accordance with present principles.
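The daltonization referenced above amounts to a per-pixel color transform: simulate what a color-deficient viewer would perceive, then redistribute the lost red/green information into channels that remain distinguishable. A minimal illustrative sketch follows; the coefficients are rough protanopia approximations chosen for illustration, not values from the present disclosure.

```python
def daltonize_pixel(r, g, b):
    """Return an adjusted (r, g, b) tuple for a red-green impaired viewer.

    The simulation matrix below is an illustrative approximation of
    protanope perception, not an exact colorimetric model.
    """
    # Simulate how a protanope perceives the pixel.
    sim_r = 0.567 * r + 0.433 * g
    sim_g = 0.558 * r + 0.442 * g
    sim_b = 0.242 * g + 0.758 * b
    # The perceptual error is the information the viewer loses.
    err_r, err_g, err_b = r - sim_r, g - sim_g, b - sim_b
    # Redistribute the lost red/green information into visible channels.
    out_r = r
    out_g = min(255, max(0, g + 0.7 * err_r + err_g))
    out_b = min(255, max(0, b + 0.7 * err_r + err_b))
    return (round(out_r), round(out_g), round(out_b))
```

Applied frame-by-frame at block 86 (or at the receiving CE device when a single stream is multicast), neutral grays pass through unchanged while strongly red or green pixels are shifted toward channels the viewer can distinguish.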
Also in some other embodiments, the first display device (e.g., a TV) forwards the content to the second display device (e.g. a tablet). Thus, the content may be sent from the server to the first device, and the companion display is then “slaved” off of the first device. Control used to trick play content with the first device will also cause content to be trick played on the second device. The first device may process the content for the second device or the content may be streamed with the second display processing the content according to the sensory impairment.
Regardless, and describing the timing parameter determined at block 88: the timing parameter used to determine, e.g., when to provide an AV content stream to two CE devices for simultaneous presentation of the same portions of the AV content thereon (e.g., minute one, second fifty-two of the AV content is presented on both CE devices at the same time) but with different sensory impairment configurations may be based on, e.g., residential Wi-Fi network conditions over which the AV content will be provided to the CE devices (such as available bandwidth) and/or any wired connection speed differences, such as a Wi-Fi connection for one device and an HDMI connection for another.
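One simple way to realize such a timing parameter is to estimate each device's delivery latency and delay the faster path so both begin the same portion together. The helper below is a hypothetical sketch, not part of the disclosure:

```python
def start_delays(latencies_ms):
    """Given estimated per-device delivery latencies in milliseconds
    (e.g. HDMI vs. Wi-Fi paths), return the extra delay each device
    should apply so all devices begin presentation together."""
    worst = max(latencies_ms.values())
    return {device: worst - latency for device, latency in latencies_ms.items()}
```

For example, if the TV's HDMI path is estimated at 20 ms and the tablet's Wi-Fi path at 120 ms, the TV would hold its stream an extra 100 ms so that minute one, second fifty-two lands on both screens at the same moment.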
In any case, after block 88 at which the one or more timing parameters are determined, the exemplary logic concludes at block 90 where the logic presents and/or configures the CE devices to present the AV content in accordance with the one or more sensory impairment settings according to the at least one timing parameter so that the AV content is presented concurrently on both of the CE devices. Thus, e.g. a shared-viewing experience is created where a person without a sensory impairment may be able to observe the AV content on one of the CE devices in its unaltered form for that user's optimal viewing, while a sensory-impaired person may observe the AV content on another of the CE devices in e.g. daltonized form for the sensory-impaired user's optimal viewing, but in either case both viewers observe the same portion of the AV content concurrently in the same room using two CE devices as if they were both observing the AV content on a single CE device.
Continuing the detailed description in reference to
The logic then moves to block 96 where it provides e.g. two versions of the same underlying AV content to the CE devices, set top box, etc. such that the two versions of the AV content may be presented simultaneously or near-simultaneously. Notwithstanding the exemplary logic described in reference to
Now in reference to
Thus, relative to the AV content as presented on the CE device 104, the content on the CE device 102 in the present exemplary instance is daltonized as may be appreciated from the differing shading of a cloud 108 in the sky to symbolize on the black and white figure that the color presentation of the cloud as presented on the CE device 102 is not the same as it is presented on the CE device 104. As may also be appreciated from
Moving from
The UI 120 of
Concluding the description of
Moving to
Now in reference to
Thus, as one specific example, selection of the selector element 170 may automatically without further user input cause the AV content associated therewith to automatically be presented on two CE devices (e.g. identified as being in proximity to each other, to the set top box, and/or in the same room) that have had their respective CE device sensory impairment settings configured prior to selection of the element 170. Thus, the AV content may be seamlessly presented on two devices responsive to selection of the selector element 170. If, however, the CE devices have not had their respective CE device sensory impairment settings configured prior to selection of the element 170 (or alternatively to automatically presenting the content even if they have had their respective CE device sensory impairment settings configured prior to selection of the element 170), then a settings UI such as the UI 120 may be presented to configure one or more of the CE devices in accordance with present principles.
Still in reference to the UI 160 of
Continuing the description of
Moving to
As may be appreciated from
In addition to the foregoing, the UI 180 also includes another prompt 186 prompting a user regarding whether to set impairment settings for the device presenting the UI 180 and/or the other detected device. The prompt 186 thus includes yes and no options that are selectable using the respective radio buttons associated therewith to provide input to the CE device presenting the UI 180 for whether or not to configure settings for one or both devices. If the user declines to configure settings, then the content may be presented on one or both CE devices (e.g. depending on the user's selection from the prompt 184) whereas if the user provides input (e.g. selecting “yes” on the prompt 186) to configure one or more settings, another UI such as the settings UI 120 of
Now in reference to
Concluding the detailed description in reference to
Now with no particular reference to any figure, it is to be understood that in some embodiments the magnification described above to assist, e.g., a visually impaired person with observing a particular portion of presented AV content, such as a person presented in an image, may include magnifying only the head of the person, the person as a whole, two or more heads of people engaged in a conversation in the image, etc.
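Such magnification can be implemented as a crop-and-scale: pick the source rectangle centered on the region of interest (a head, a whole person) whose dimensions are the frame size divided by the zoom factor, then scale that rectangle back up to fill the display. A minimal sketch, with hypothetical names not drawn from the disclosure:

```python
def magnify_crop(frame_w, frame_h, region, zoom=2.0):
    """Return the (x, y, w, h) source rectangle that, when scaled up to
    fill the full frame, magnifies the center of `region` (x, y, w, h)
    -- e.g. a detected head -- by `zoom`."""
    rx, ry, rw, rh = region
    cx, cy = rx + rw / 2, ry + rh / 2       # center of the region of interest
    crop_w, crop_h = frame_w / zoom, frame_h / zoom
    # Clamp so the crop rectangle never leaves the frame.
    x = min(max(cx - crop_w / 2, 0), frame_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), frame_h - crop_h)
    return (x, y, crop_w, crop_h)
```

To track two heads in a conversation, the bounding box of both heads could be passed as `region`, with `zoom` reduced enough that the crop covers them both.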
Also in some embodiments, audio impairment settings to be employed in accordance with present principles may include adjusting and/or configuring volume output, pitch and/or frequency on one of the CE devices such as e.g. making volume output on the subject CE device louder than output on the “companion” device and/or louder than a preset or pre-configuration of the AV content. Notwithstanding, it is also to be understood that in other instances, to e.g. avoid a “stereo” audio effect, the two devices may be configured such that audio is only output from one of the CE devices but not the other even if video of the AV content is presented on both.
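The per-device volume adjustment above reduces, at its simplest, to applying a decibel gain to the subject device's sample stream while clipping to the valid range. A minimal sketch under the assumption of normalized floating-point samples in [-1.0, 1.0]; the helper name is illustrative:

```python
def apply_gain(samples, gain_db):
    """Scale normalized audio samples by a decibel gain, clipping to
    the [-1.0, 1.0] range so boosted samples cannot overflow."""
    factor = 10 ** (gain_db / 20)           # dB -> linear amplitude factor
    return [max(-1.0, min(1.0, s * factor)) for s in samples]
```

E.g., the hearing-impaired viewer's device might apply `apply_gain(samples, 6.0)` (roughly doubling amplitude) while the companion device plays the preset level, or outputs no audio at all to avoid the "stereo" effect noted above.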
It may now be appreciated based on the foregoing that e.g. daltonization of video (such as e.g. enhancing the distinction of green and/or red content) of AV content can assist with the viewing of AV content on a “companion” CE device to e.g. a TV that also presents the content. Additionally, closed captioning and other metadata may be presented on one of the CE devices (e.g. overlaid on a portion of the AV content) to further assist a person with a sensory impairment.
Note also that more than two CE devices may be configured and used in accordance with present principles. Also note that in embodiments where one CE device is a TV and the other is another type of CE device such as a tablet computer, e.g. the TV may present the “normal” content and audio while the tablet may present only video of the AV content that has been optimized for one or more sensory impairments, but also that the opposite may be true in that the TV may present the optimized video while the tablet may present the “normal” content.
Addressing control of the AV content as it is presented on the two CE devices, note that e.g. if the user wishes to play, pause, stop, fast forward, rewind, etc. the content, the two CE devices may communicate with each other such that e.g. performing the function on one device (e.g. pressing “pause”) may cause that device to not only pause the content on it but also send a command to the other device to pause content such that the two contents are paused simultaneously or near-simultaneously in accordance with present principles to enhance the shared viewing experience of an AV content on two devices. In this respect, e.g. the two CE devices may be said to be “slaved” to each other such that an action occurring at one device occurs on both devices. Note further that e.g. should a set top box (e.g. and/or a home server) be providing the content to both devices, a fast forward command input to the set top box may cause the set top box to control the content as presented on each of the CE devices by causing fast forwarding on each one to appear to occur simultaneously or near simultaneously. Further still and as another example, should the content be paused, fast forwarded, etc. by manipulating a tablet computer “companion” device, gestures in free space recognizable as input by the tablet may be used to control presentation of the AV content on both devices.
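The "slaved" trick-play behavior above can be sketched as devices that forward each command to their linked peers, with a flag preventing the command from echoing back. This is an illustrative model only; class and method names are hypothetical:

```python
class SlavedPlayer:
    """Toy model of a CE device whose transport controls are mirrored
    on linked peer devices (e.g. a TV and a companion tablet)."""

    def __init__(self, name):
        self.name = name
        self.peers = []
        self.state = "playing"

    def link(self, other):
        """Slave the two devices to each other."""
        self.peers.append(other)
        other.peers.append(self)

    def command(self, action, _from_peer=False):
        """Apply a trick-play action (pause, play, ff, rew) locally and,
        unless it arrived from a peer, forward it to all linked peers."""
        self.state = action
        if not _from_peer:
            for peer in self.peers:
                peer.command(action, _from_peer=True)
```

Pressing "pause" on either device thus pauses both simultaneously or near-simultaneously; in the set top box variant described above, the box would issue the same command to each device directly.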
As indicated above, AV content may be provided to each of the CE devices on which it is to be presented in a number of ways, with one version of the AV content being optimized for observance by a person with a sensory impairment. For example, a set top box may provide (e.g. stream) the content to each CE device even over different connections (e.g. HDMI for a TV and an IP connection such as Direct WiFi for a tablet computer to also present the content). As other examples, content may be delivered to the devices via the Internet, may be streamed from the Internet to one device and then forwarded to another device where the device receiving the content from the Internet manages the timing of presentation such that the content is presented simultaneously on both devices, may be independently streamed from a server or head end but still simultaneously presented, etc. Furthermore, e.g. in an instance where one device forwards the content to the other CE device, the CE device receiving the forwarded content may parse the content for metadata to then display e.g. closed captioning, magnify the content or at least a portion thereof to show people talking, etc., and/or the forwarding device may daltonize a version of the content before forwarding it. Even further, present principles recognize that a content stream that is received by the “companion” device may have the metadata such as closed captioning already composited in the video (e.g. graphics displayed/overlaid on top of the video) by the forwarding device, thus allowing the “companion” device to simply render the video on the screen to also convey the metadata. Also, note that even though closed captioning has been referenced herein, other types of metadata (e.g., displayed as certain type(s) of closed captioning) may be presented/overlaid on video in accordance with present principles such as e.g. plot information (e.g. 
a plot synopsis, scene descriptions and/or scene synopsis, plot narration, etc.) to thus assist a cognitively impaired viewer with following and understanding what is occurring in the AV content.
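The compositing path described above, where the forwarding device burns the caption or plot text into the video before sending it, can be illustrated with a toy model in which a frame is a list of equal-width strings standing in for raster lines. The helper is hypothetical and only meant to show the division of labor: the forwarding device composites, the companion merely renders.

```python
def composite_caption(frame_rows, caption):
    """Return a copy of `frame_rows` (equal-length strings standing in
    for raster lines) with `caption` burned into the bottom row, so the
    companion device need only render the result as-is."""
    rows = list(frame_rows)                  # leave the source frame intact
    width = len(rows[-1])
    rows[-1] = caption[:width].center(width) # truncate and center the text
    return rows
```

In a real pipeline the same idea would overlay rendered caption glyphs onto the bottom of each decoded video frame before re-encoding and forwarding, sparing the companion device any caption parsing of its own.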
In addition, e.g. when using a set top box, Internet, and/or a server in accordance with present principles, when providing content to two CE devices the Digital Living Network Alliance (DLNA) standard may be used, as may be e.g. UPnP protocols and W3C standards either in conjunction with or separately from DLNA standards. Also, e.g., the CE devices (e.g. their respective displays) may act as digital media renderers (DMRs) and/or digital media players (DMPs) and/or digital media control points (DMCs) that may interface with the set top box, the set top box acting as a digital media server (DMS), where e.g. the DMS would ensure that the same content was being streamed to both displays synchronously albeit with at least one version of the content being optimized for observance based on a sensory impairment.
While the particular DUAL AUDIO VIDEO OUTPUT DEVICES WITH ONE DEVICE CONFIGURED FOR THE SENSORY IMPAIRED is herein shown and described in detail, it is to be understood that the subject matter which is encompassed by the present invention is limited only by the claims.