Electronic devices in the consumer, commercial, and industrial sectors may output video to displays, monitors, screens, and other devices capable of displaying visual media or content. Users may wish to serve as a moderator and transmit, replicate, or share content from one display to another, and may also wish to conserve power resources on a display.
Various examples described below provide for displaying, transmitting, replicating, and/or sharing display content based on a user eye gaze, such as a teacher in a classroom setting sharing content with students, or a speaker in a business environment sharing content with audience members, or a user serving as a moderator in general. Various examples described below also provide for improving display power management and/or reducing distractions by adjusting various display values and/or re-mapping display images or content based on a user eye gaze, including through local dimming on a backlight.
Generally, an electronic device such as a desktop computer, laptop computer, tablet, mobile device, retail point of sale device, or other device (hereinafter “device”) may connect to or communicate with a display, monitor, or screen (hereinafter “display”) to display content generated or output from the device. In some examples, the device may output content to multiple displays, such as in a dual panel setup. The device may render content, which may be further processed by, for example, a display controller embedded in a display.
According to some examples, the device may also connect to or communicate with other devices or displays to display content. In the example of a business presentation, a moderator's device such as a desktop computer may display content, such as windows of various software applications, which may be shared or replicated onto, for example, the laptops of audience members.
In such an example, the moderator's device may display multiple windows, such as a word processing document, a video, a spreadsheet, and/or a chart, and such windows may be displayed on a single display or across multiple displays at the direction of the moderator. A moderator may wish to share or replicate one of the windows or screen areas to audience members for display on their devices, or to share only the windows and/or desktop of one of the moderator's multiple displays. The moderator may also wish to frequently change the screen area, window, or content displayed to the audience members based on the moderator's shifting focus area or region of interest, without the need to input such changes via a mouse or other physical input device.
In such an example, a moderator may also wish to conserve power, either on the moderator's displays, or the displays of the audience members. For example, if a moderator's display is displaying multiple windows, but the moderator is focused on a particular screen area, window, or region of interest, the moderator may wish to dim or turn off the inactive areas of the moderator's display, and/or the audience member displays. Power saving may be especially important in the case of mobile displays where the power draw of a display is a major component of battery drain, and in the case of fixed displays of large size that have substantial power draws.
In another example, the moderator may wish to change the content displayed in the inactive areas on the displays to focus attention on an active window or screen area and reduce distractions from inactive windows or screen areas, and to reduce eye strain.
In the example of
Displays 104 and/or 106 may be a light emitting diode (“LED”) display, an organic light emitting diode (“OLED”) display, a projector, a mobile display, a holographic display, or any other display type capable of displaying an image or content from an electronic device.
Displays 104 and 106 may display operating system desktops 116 and 118 with a taskbar and windows or screen areas 108, 110, 112, and 114. The displays may also be coupled to a keyboard and mouse, or other devices or peripherals. Displays 104 and/or 106 may also comprise a camera, LED, or other sensor for detecting a user or users, distances between users and the displays, locations of users, and eye gazes. In some examples, the sensor may be mounted within the bezel of the display, as shown in
In the example of
In the example of
More specifically, in the example of
In the example of
In such an example, window 114 may be displayed full-screen on the devices 126, 128, and 130 of users 120, 122, and 124 as windows 132, 134, and 136. In other examples, the windows 132, 134, and 136 may mirror the relative size and relative location of window 114 on display 106, or may be selectively controllable by users 120, 122, and 124.
In the example of
In such an example, window 114 may be displayed on devices 126, 128, and 130 of users 120, 122, and 124 as windows 132, 134, and 136, along with the remainder of the content displayed on display 106. In some examples, the content may be displayed on the displays of devices 126, 128, and 130 while the inactive screen areas of display 106 remain powered on and at full brightness, while in other examples the displays of devices 126, 128, and 130 may mirror display 106.
In
In one example, the inactive screen area of display 206 may be turned off, i.e., a device attached to display 206 may instruct a display controller of display 206 to adjust a backlight, OLED, or other illumination component to disable or power off the inactive screen area, e.g., at a region level, grid level, or pixel level. In another example, the inactive screen area of display 204 may be dimmed, but not turned off.
In an example of an LED-backlit display, to enable local dimming, an input image may be analyzed by a processor, and optimized backlight illumination patterns may be generated based on calibrated data from the backlight illumination patterns of each of the independent LED strings. The display image may then be re-mapped based on the original image and the backlight illumination pattern. A spatial profile may be used as input to the local dimming analysis.
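The analyze-then-remap step described above can be illustrated with a minimal sketch. This is not the patented implementation: the zone grid, the brightness floor, and the max-over-zone backlight rule are all illustrative assumptions, and a real controller would also factor in the calibrated per-string illumination patterns and spatial profiles.

```python
import numpy as np

def local_dimming_remap(image, zones=(4, 8), floor=0.05):
    """Sketch of local dimming: derive a per-zone backlight level from
    the input image, then re-map (compensate) pixel values so the
    perceived image approximates the original. Grayscale image with
    values in [0, 1]; zone grid and floor are assumed tuning values."""
    h, w = image.shape
    zh, zw = h // zones[0], w // zones[1]
    backlight = np.zeros_like(image)
    for r in range(zones[0]):
        for c in range(zones[1]):
            block = image[r*zh:(r+1)*zh, c*zw:(c+1)*zw]
            # Drive each zone no brighter than its brightest pixel needs.
            level = max(block.max(), floor)
            backlight[r*zh:(r+1)*zh, c*zw:(c+1)*zw] = level
    # Re-map: boost pixel values to compensate for the dimmed backlight.
    remapped = np.clip(image / backlight, 0.0, 1.0)
    return remapped, backlight
```

Zones covering dark regions of the image are driven down toward the floor, which is where the power savings come from, while the re-mapped pixel values keep the displayed result close to the original.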
In other examples, the inactive screen area may remain powered on, but may be altered such as by adjusting a color saturation, contrast level, or other display property of the inactive screen area to focus a user's attention on the active screen area, e.g., window 210 in the example of
According to other examples, a peripheral area of the screen outside of the active screen area may be determined, with a change to the color saturation, contrast level, or other display property applied accordingly, e.g., as a gradient toward the extreme edge of the periphery. In such examples, the overall brightness average of the screen may be lowered, resulting in power savings.
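A gradient toward the edge of the periphery can be sketched as a per-pixel gain map. The rectangle convention, the linear falloff, and the `min_gain` value are assumptions for illustration, not the disclosed method.

```python
import numpy as np

def peripheral_gradient(shape, active, min_gain=0.3):
    """Per-pixel brightness gain: 1.0 inside the active rectangle,
    fading linearly with distance outside it toward min_gain at the
    farthest screen edge. 'active' = (top, left, bottom, right) in
    pixels; min_gain is an assumed tuning value."""
    h, w = shape
    t, l, b, r = active
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance (in pixels) from each pixel to the active rectangle.
    dy = np.maximum(np.maximum(t - ys, ys - (b - 1)), 0)
    dx = np.maximum(np.maximum(l - xs, xs - (r - 1)), 0)
    dist = np.hypot(dy, dx)
    max_dist = max(dist.max(), 1.0)
    return 1.0 - (1.0 - min_gain) * (dist / max_dist)
```

Multiplying the frame (or the backlight zone levels) by this gain map dims the periphery progressively, so the average brightness of the screen drops and power is saved while the active area stays at full brightness.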
According to another set of examples, a pattern may be applied to the inactive screen area or the peripheral areas outside the active screen area. Examples of such patterns may include geometric patterns, radial patterns and/or grid patterns, photos, or other patterns or images to focus a user's attention toward an active screen area. Applying a pattern may include re-mapping an image based on, for example, a backlight unit illumination pattern and the original image, factoring in any constraints of the backlight. Patterns or images may also be selected from a database based on input such as the active screen area window type, color saturation, or other properties of the active or inactive screen areas.
In some examples, to improve user experience, a temporal profile may be determined or fetched to minimize or transition the impact of a change in power state, brightness, color saturation, contrast level, other display property, pattern application, or re-mapping. In other examples, a spatial profile may be determined, e.g., based on signal processing, or fetched to minimize flashing or halo effects. In other examples, temporal profiles and/or spatial profiles may be combined with a user interface design rule to determine an appropriate delta between a brightness level of an active screen area and an inactive screen area, or whether center-to-edge shading should be applied, as examples.
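One simple temporal profile is an exponential ramp: instead of switching brightness in one frame, each frame steps a fraction of the remaining distance to the target level. The step rate and stopping threshold below are assumed tuning values, not values from the disclosure.

```python
def smooth_transition(current, target, rate=0.2, eps=0.01):
    """Sketch of a temporal profile: per-frame brightness levels that
    ease from 'current' to 'target' (exponential ramp), so a power-state
    or brightness change fades in rather than flashing. 'rate' and 'eps'
    are assumed tuning values."""
    levels = [current]
    while abs(levels[-1] - target) > eps:
        # Move a fixed fraction of the remaining distance each frame.
        levels.append(levels[-1] + rate * (target - levels[-1]))
    levels.append(target)
    return levels
```

A spatial profile could be combined with this by applying the same easing per backlight zone, with neighboring zones constrained to limit the brightness delta that produces halo effects.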
In some examples, a minimum time interval, such as a power-save time interval, may also be enforced. For example, an active screen area, region of interest, or focus area may be determined once an eye gaze has been detected on a particular screen area for a minimum amount of time without interruption. In the example of
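The dwell-time condition above can be sketched as a small state tracker: any change of gazed area restarts the timer, and the area is reported as active only once the gaze has rested on it for the minimum interval. The class name, interface, and interval value are illustrative assumptions.

```python
class GazeDwellGate:
    """Report a screen area as the active area only after the eye gaze
    has dwelt on it, uninterrupted, for a minimum time interval.
    'min_dwell_s' is an assumed power-save time interval."""

    def __init__(self, min_dwell_s=1.5):
        self.min_dwell_s = min_dwell_s
        self._area = None
        self._since = None

    def update(self, gazed_area, now_s):
        # Any change of gazed area interrupts and restarts the timer.
        if gazed_area != self._area:
            self._area = gazed_area
            self._since = now_s
            return None
        if now_s - self._since >= self.min_dwell_s:
            return self._area  # dwell satisfied: this is the active area
        return None
```

Feeding `update()` with each gaze sample keeps brief glances at other windows from flipping the active screen area and triggering unwanted dimming transitions.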
In the example of
According to another example, a second display may be added to the monitor configuration of
In block 302, a camera or other sensor coupled to a display may detect a user in proximity to the display. In block 304, a processing resource, e.g., a processor, coupled to the camera may determine a primary user and a primary user eye gaze.
In block 306, an active screen area and an inactive screen area are determined based on the primary user eye gaze. In block 308, a power-save time interval is fetched.
In block 310, an active screen area is transmitted to a remote display. In block 312, a display hardware driver is instructed to alter an inactive screen area render in the event that the power-save time interval is satisfied. Altering the inactive screen area may comprise altering a power state, brightness, color saturation, contrast level, other display property, pattern application, or re-mapping, as discussed above.
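Blocks 302 through 312 can be sketched end to end. All of the callables and the `Gaze` structure below are hypothetical stand-ins for the camera, gaze-detection, transmission, and display-driver interfaces; they are not a real driver API.

```python
from dataclasses import dataclass

@dataclass
class Gaze:
    area: str       # screen area under the primary user's gaze
    dwell_s: float  # uninterrupted dwell time on that area, in seconds

def run_flow(user_present, gaze, all_areas, power_save_interval_s,
             transmit, alter_render):
    """Sketch of blocks 302-312 with hypothetical interfaces:
    'transmit' sends the active area to a remote display (block 310);
    'alter_render' alters the inactive-area render (block 312)."""
    if not user_present:                        # block 302: detect user
        return None
    active = gaze.area                          # blocks 304-306
    inactive = [a for a in all_areas if a != active]
    transmit(active)                            # block 310
    if gaze.dwell_s >= power_save_interval_s:   # blocks 308/312
        alter_render(inactive)
    return active, inactive
```

Note that the active area is transmitted on every pass, while the inactive-area alteration (dimming, re-mapping, pattern application) is gated on the power-save time interval.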
In an example, device 400 comprises a processing resource such as a processor or CPU 402; a non-transitory computer-readable storage medium 404; a display controller 406; a memory 408; and a camera or other sensor 410. In some examples, device 400 may also comprise a memory resource such as RAM, ROM, or flash memory; a disk drive such as a hard disk drive or a solid-state disk drive; an operating system; and a network interface such as a Local Area Network (LAN) card, a wireless 802.11x LAN card, a 3G or 4G mobile WAN card, or a WiMAX WAN card. Each of these components may be operatively coupled to a bus.
Some or all of the operations set forth in the figures may be contained as a utility, program, or subprogram in any desired computer readable storage medium, or embedded on hardware. The computer readable medium may be any suitable medium that participates in providing instructions to the processing resource 402 for execution. For example, the computer readable medium may be non-volatile media, such as an optical or a magnetic disk, or volatile media, such as memory. The computer readable medium may also store other machine-readable instructions, including instructions downloaded from a network or the internet.
In addition, the operations may be embodied by machine-readable instructions. For example, they may exist as machine-readable instructions in source code, object code, executable code, or other formats.
Device 400 may comprise, for example, a computer readable medium that may comprise instructions 412 to display an original image; receive detection data associated with a primary user; determine a primary user and a primary user eye gaze based on the detection data; determine a region of interest in the original image based on the primary user eye gaze; and generate a remapped image for display based on the original image, the determined region of interest, and an illumination pattern.
The computer-readable medium may also store an operating system such as Microsoft Windows, Mac OS, Unix, or Linux; network applications such as network interfaces and/or cloud interfaces; and a cloud service, monitoring tool, or metrics tool, for example. The operating system may be multi-user, multiprocessing, multitasking, and/or multithreading. The operating system may also perform basic tasks such as recognizing input from input devices, such as a keyboard or a keypad; sending output to a display; keeping track of files and directories on a medium; controlling peripheral devices, such as drives, printers, or image capture devices; and/or managing traffic on a bus. The network applications may include various components for establishing and maintaining network connections, such as machine readable instructions for implementing communication protocols including, but not limited to, TCP/IP, HTTP, Ethernet, USB, and FireWire.
In certain examples, some or all of the processes performed herein may be integrated into the operating system. In certain examples, the processes may be at least partially implemented in digital electronic circuitry, in computer hardware, in machine readable instructions, or in any combination thereof.
The above discussion is meant to be illustrative of the principles and various examples of the present disclosure. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US15/28463 | 4/30/2015 | WO | 00