Computing devices generally display two-dimensional user interfaces using displays with two-dimensional display screens. When such two-dimensional displays are viewed from any angle other than perpendicular to the display screen, the viewer may experience visual distortion from the change in perspective. However, certain classes of computing devices are often viewed from angles other than perpendicular to the display screen. For example, tablet computers are often used while resting flat on a table top surface. Similarly, some computing devices embed their display in the top surface of a table-like device (e.g., the Microsoft® PixelSense™).
Several available technologies are capable of tracking the location of a user's head or eyes. A camera with appropriate software may be capable of discerning the location of a user's head or eyes. More sophisticated sensors may supplement the camera with depth-sensing hardware to detect the location of the user in three dimensions. Dedicated eye-tracking sensors also exist, which can provide information on the location of a user's eyes and the direction of the user's gaze.
The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
In one embodiment, the computing device 100 is configured to display content on a display 132 and to apply a corrective distortion to the content as a function of one or more viewing angles of a viewer relative to the display screen 134 of the display 132.
By applying such a corrective distortion to the content based on the viewing angle relative to the viewer, the computing device 100 allows the viewer to view the display 132 from any desired position while maintaining a viewing perspective of the displayed content similar to the perspective obtained when viewing the content perpendicular to the display 132. For example, the viewer may rest the computing device 100 flat on a table top and use the computing device 100 from a comfortable seated position, without significant visual distortion and without leaning over the computing device 100.
The computing device 100 may be embodied as any type of computing device having a display, or coupled to a display, and capable of performing the functions described herein. For example, the computing device 100 may be embodied as, without limitation, a tablet computer, a table-top computer, a notebook computer, a desktop computer, a personal computer (PC), a laptop computer, a mobile computing device, a smart phone, a cellular telephone, a handset, a messaging device, a work station, a network appliance, a web appliance, a distributed computing system, a multiprocessor system, a processor-based system, a consumer electronic device, a digital television device, a set-top box, and/or any other computing device configured to determine one or more viewing angles for a viewer of the content and improve the viewing perspective of the content based on the one or more viewing angles.
In the illustrative embodiment, the computing device 100 includes a processor 120, an I/O subsystem 124, a memory 126, a data storage 128, one or more peripheral devices 130, and a display 132.
The processor 120 of the computing device 100 may be embodied as any type of processor capable of executing software/firmware, such as a microprocessor, digital signal processor, microcontroller, or the like. The processor 120 is illustratively embodied as a single core processor having a processor core 122. However, in other embodiments, the processor 120 may be embodied as a multi-core processor having multiple processor cores 122. Additionally, the computing device 100 may include additional processors 120 having one or more processor cores 122.
The I/O subsystem 124 of the computing device 100 may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120 and/or other components of the computing device 100. In some embodiments, the I/O subsystem 124 may be embodied as a memory controller hub (MCH or “northbridge”), an input/output controller hub (ICH or “southbridge”), and a firmware device. In such embodiments, the firmware device of the I/O subsystem 124 may be embodied as a memory device for storing Basic Input/Output System (BIOS) data and/or instructions and/or other information (e.g., a BIOS driver used during booting of the computing device 100). However, in other embodiments, I/O subsystems having other configurations may be used. For example, in some embodiments, the I/O subsystem 124 may be embodied as a platform controller hub (PCH). In such embodiments, the memory controller hub (MCH) may be incorporated in or otherwise associated with the processor 120, and the processor 120 may communicate directly with the memory 126 (as shown by the hashed line in the illustrative figure).
The processor 120 is communicatively coupled to the I/O subsystem 124 via a number of signal paths. These signal paths (and the other signal paths illustrated in the figures) may be embodied as any type of signal paths capable of facilitating communication between the components of the computing device 100. For example, the signal paths may be embodied as any number of wires, cables, light guides, printed circuit board traces, vias, buses, intervening devices, and/or the like.
The memory 126 of the computing device 100 may be embodied as or otherwise include one or more memory devices or data storage locations including, for example, dynamic random access memory devices (DRAM), synchronous dynamic random access memory devices (SDRAM), double-data rate synchronous dynamic random access memory devices (DDR SDRAM), mask read-only memory (ROM) devices, erasable programmable ROM (EPROM) devices, electrically erasable programmable ROM (EEPROM) devices, flash memory devices, and/or other volatile and/or non-volatile memory devices. The memory 126 is communicatively coupled to the I/O subsystem 124 via a number of signal paths. Although only a single memory device 126 is illustrated in the figures, the computing device 100 may include additional memory devices in other embodiments.
The data storage 128 may be embodied as any type of device or devices configured for the short-term or long-term storage of data. For example, the data storage 128 may include any one or more memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
In some embodiments, the computing device 100 may also include one or more peripheral devices 130. Such peripheral devices 130 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. For example, in some embodiments, the peripheral devices 130 may include a display, touch screen, graphics circuitry, keyboard, mouse, speaker system, and/or other input/output devices, interface devices, and/or peripheral devices.
In the illustrative embodiment, the computing device 100 also includes a display 132 and, in some embodiments, may include viewer location sensor(s) 136 and a viewing angle input 138. The display 132 of the computing device 100 may be embodied as any type of display capable of displaying digital information, such as a liquid crystal display (LCD), a light-emitting diode (LED) display, a plasma display, a cathode ray tube (CRT) display, or other type of display device. Regardless of the particular type of display, the display 132 includes a display screen 134 on which the content is displayed. In some embodiments, the display screen 134 may be embodied as a touch screen to facilitate user interaction.
The viewer location sensor(s) 136 may be embodied as any one or more sensors capable of determining the location of the viewer's head and/or eyes, such as a digital camera, a digital camera coupled with an infrared depth sensor, or an eye tracking sensor. For example, the viewer location sensor(s) 136 may be embodied as a wide-angle, low-resolution sensor such as a commodity digital camera capable of determining the location of the viewer's head. Alternatively, the viewer location sensor(s) 136 may be embodied as a more precise sensor, for example, an eye tracking sensor. The viewer location sensor(s) 136 may determine only the direction from the computing device 100 to the viewer's head and/or eyes, and are not required to determine the distance to the viewer.
The viewing angle input 138 may be embodied as any control capable of allowing the viewer to manually adjust the desired viewing angle, such as a hardware wheel, hardware control stick, hardware buttons, or a software control such as a graphical slider. In embodiments including the viewing angle input 138, the computing device 100 may or may not include the viewer location sensor(s) 136.
In use, the computing device 100 may establish an environment for improving the viewing perspective of content displayed on the display 132. In the illustrative embodiment, the environment includes a viewing angle determination module 202, a content transformation module 204, and a content rendering module 206.
The viewing angle determination module 202 is configured to determine one or more viewing angles of the content relative to a viewer of the content. In some embodiments, the viewing angle determination module 202 may receive data from the viewer location sensor(s) 136 and determine the viewing angle(s) based on the received data. Alternatively or additionally, the viewing angle determination module 202 may receive viewing angle input data from the viewing angle input 138 and determine the viewing angle(s) based on that input data. For example, in some embodiments, the viewing angle input data received from the viewing angle input 138 may override, or otherwise have a higher priority than, the data received from the viewer location sensor(s) 136. The viewing angle determination module 202 then supplies the determined viewing angle(s) to the content transformation module 204.
The content transformation module 204 generates a content transformation for each of the one or more viewing angles determined by the viewing angle determination module 202 as a function of the one or more viewing angles. The content transformation is useable to apply a corrective distortion to the content, to improve the viewing perspective of the content when viewed at the one or more viewing angles. The content transformation may be embodied as any type of transformation that may be applied to the content. For example, in some embodiments, the content transformation may be embodied as an equation, an algorithm, a raw number, a percentage value, or other number or metric that defines, for example, the magnitude to which the content, or portion thereof, is stretched, cropped, compressed, duplicated, or otherwise modified. The generated content transformation is used by the content rendering module 206.
The content rendering module 206 renders the content as a function of the content transformation generated by the content transformation module 204. The rendered content may be generated by an operating system of the computing device 100, generated by one or more user applications executed on the computing device 100, or embodied as content (e.g., pictures, text, or video) stored on the computing device 100. For example, in some embodiments, the rendered content may be generated by, or otherwise in, a graphical browser such as a web browser executed on the computing device 100. The content may be embodied as content stored in a hypertext markup language (HTML) format for structuring and presenting content, such as HTML5 or earlier versions of HTML. The rendered content is displayed on the display screen 134 of the display 132.
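The module flow described above can be summarized in code. The following TypeScript sketch is illustrative only: the interface shapes, function names, and the default perpendicular angle are assumptions made for this example, not structures defined by the disclosure.

```typescript
// Hypothetical data shapes for the three-module flow described above. A
// viewing angle is expressed in radians between the viewer's line of sight
// and the plane of the display screen 134.
interface ViewingAngles {
  primary: number;                    // used for uniform transformations
  perLocation?: Map<string, number>;  // optional angle per content location
}

// Per the description above, a content transformation may be as simple as
// a per-axis scale factor that stretches or compresses the content.
interface ContentTransform {
  scaleX: number;
  scaleY: number;
}

// Module 202: determine the viewing angle(s); manual input, when present,
// overrides sensor data, as described above.
function determineViewingAngles(sensorAngle?: number, manualAngle?: number): ViewingAngles {
  const primary = manualAngle ?? sensorAngle ?? Math.PI / 2; // default: perpendicular
  return { primary };
}

// Module 204: generate a transformation as a function of the viewing angle
// (here, the stretch strategy discussed later in this section). Module 206
// would then render the content using this transformation.
function generateTransform(angles: ViewingAngles): ContentTransform {
  return { scaleX: 1, scaleY: 1 / Math.sin(angles.primary) };
}
```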
In use, the computing device 100 may execute a method 300 for improving the viewing perspective of content displayed on the display screen 134. The method 300 begins with block 302, in which the computing device 100 determines whether to automatically adjust rendering based on the viewing angle(s) of a viewer. If so, the method 300 advances to block 304.
In block 304, the viewing angle determination module 202 determines a primary viewer of the content. To do so, the viewing angle determination module 202 utilizes the viewer location sensor(s) 136. When only one viewer is present, the primary viewer is simply the sole viewer. However, when two or more viewers are present, the viewing angle determination module 202 may be configured to determine or select one of the viewers as the primary viewer for which the viewing perspective of the content is improved. For example, the viewing angle determination module 202 may determine the primary viewer by detecting which viewer is actively interacting with the computing device 100, by selecting the viewer most proximate to the display screen 134, by randomly selecting the primary viewer from the pool of detected viewers, based on pre-defined criteria or input supplied to the computing device 100, or by any other suitable technique.
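A minimal sketch of one such selection policy follows; the `Viewer` fields and the interaction/proximity heuristics are hypothetical stand-ins for whatever data the viewer location sensor(s) 136 actually provide.

```typescript
interface Viewer {
  id: number;
  distanceToScreen: number; // meters, e.g., from a depth-capable sensor
  isInteracting: boolean;   // e.g., currently touching the screen
}

// Pick a primary viewer: prefer an actively interacting viewer, then fall
// back to the most proximate one, per the strategies listed above.
function selectPrimaryViewer(viewers: Viewer[]): Viewer | undefined {
  if (viewers.length === 0) return undefined;
  const interacting = viewers.find(v => v.isInteracting);
  if (interacting) return interacting;
  return viewers.reduce((a, b) => (a.distanceToScreen <= b.distanceToScreen ? a : b));
}
```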
In block 306, the viewing angle determination module 202 determines the location of the primary viewer relative to the display screen 134 of the display 132. To do so, the viewing angle determination module 202 uses the sensor signals received from viewer location sensor(s) 136 and determines the location of the primary viewer based on such sensor signals. In the illustrative embodiment, the viewing angle determination module 202 determines the location of the primary viewer by determining the location of the viewer's head and/or eyes. However, the precise location of the viewer's eyes is not required in all embodiments. Any suitable location determination algorithm or technique may be used to determine the location of the primary viewer relative to the display screen 134. In some embodiments, the location of the viewer is determined only in one dimension (e.g., left-to-right relative to the display screen 134). In other embodiments, the location of the viewer may be determined in two dimensions (e.g., left-to-right and top-to-bottom relative to the display screen 134). Further, in some embodiments, the location of the viewer may be determined in three dimensions (e.g., left-to-right, top-to-bottom, and distance from the display screen 134).
In block 308, the viewing angle determination module 202 determines one or more viewing angles of the content relative to the viewer.
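One way to make the determination of block 308 concrete: place the display screen in the z = 0 plane and treat the viewing angle at a content location as the elevation of the viewer above that reference plane. The TypeScript sketch below assumes that coordinate convention and a viewer location already resolved in three dimensions (per block 306); it is an illustration, not the disclosed algorithm.

```typescript
type Vec3 = { x: number; y: number; z: number };

// Viewing angle at a content location: the elevation of the viewer's
// head/eyes above the reference plane defined by the display screen (the
// screen lies in z = 0, the viewer at positive z). Returns radians in
// (0, pi/2]; pi/2 means the viewer looks perpendicular to the screen.
function viewingAngleAt(contentLocation: Vec3, viewer: Vec3): number {
  const dx = viewer.x - contentLocation.x;
  const dy = viewer.y - contentLocation.y;
  const dz = viewer.z - contentLocation.z;
  return Math.asin(dz / Math.hypot(dx, dy, dz));
}

// Example: a viewer seated 0.4 m above and 0.5 m back from the center of a
// tablet lying flat; the angle at the screen center is about 39 degrees.
const center: Vec3 = { x: 0, y: 0, z: 0 };
const viewer: Vec3 = { x: 0, y: -0.5, z: 0.4 };
console.log((viewingAngleAt(center, viewer) * 180) / Math.PI); // ≈ 38.7
```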
In block 310, the viewing angle determination module 202 may determine a viewing angle for each content location on the display screen 134 as a function of the determined location of the viewer and a reference plane defined by the display screen 134 of the display 132. Each content location may be embodied as a single pixel, or a group of pixels, of the display screen 134.
Alternatively, in some embodiments, the viewing angle determination module 202 may determine only a single, primary viewing angle as a function of the location of the viewer and a pre-defined content location on the display screen 134 of the display 132 in block 312. In some embodiments, the pre-defined content location is selected to be located at or near the center of the display screen 134 of the display 132.
Additionally, in some embodiments, the viewing angle determination module 202 may extrapolate a viewing angle for the remaining content locations on the display screen 134 in block 314, as a function of the primary viewing angle determined in block 312 and each corresponding content location.
After one or more viewing angles are determined in block 308, the method 300 advances to block 316. In block 316, the content transformation module 204 generates a content transformation useable to apply a corrective distortion to the content, to improve the viewing perspective of the content when viewed at the one or more viewing angles. In some embodiments, the content transformation is embodied as a uniform transformation configured to uniformly transform the content regardless of the magnitude of the particular viewing angle. In such embodiments, the content transformation transforms the content as a function of the primary viewing angle determined in block 312, which approximates the other viewing angles. Alternatively, the content transformation module 204 may generate a non-uniform content transformation. That is, the content transformation module 204 may generate a unique content transformation for each viewing angle of the one or more viewing angles determined in block 310 or block 314.
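A sketch contrasting the uniform and non-uniform cases follows; the string-keyed map of per-location transforms is an assumed representation, not one specified by the disclosure.

```typescript
interface ContentTransform { scaleX: number; scaleY: number } // as sketched above

// Uniform case: one transformation derived from the primary viewing angle.
// Non-uniform case: a unique transformation per content location.
function generateTransforms(
  primaryAngle: number,
  perLocationAngles?: Map<string, number>,
): Map<string, ContentTransform> {
  const transforms = new Map<string, ContentTransform>();
  if (!perLocationAngles) {
    // Every content location shares the primary-angle transformation.
    transforms.set("uniform", { scaleX: 1, scaleY: 1 / Math.sin(primaryAngle) });
  } else {
    for (const [location, angle] of perLocationAngles) {
      transforms.set(location, { scaleX: 1, scaleY: 1 / Math.sin(angle) });
    }
  }
  return transforms;
}
```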
As discussed above, the content transformation may be embodied as any type of transformation that may be applied to the content. In some embodiments, the content transformation may scale the content along an axis to thereby intentionally distort the content and improve the viewing perspective. For example, given a viewing angle α between the location of the viewer and a particular content location, the foreshortening of the content as seen by the viewer can be approximated as the sine of the viewing angle, that is, as sin(α). Such perceived distortion may be corrected by stretching the content (that is, by applying a corrective distortion) by an appropriate amount along the axis experiencing the perceived distortion. For example, considering a tablet computing device lying flat on a table, when viewed by a viewer from a seated position, the displayed content may appear distorted along the vertical content axis (e.g., along the visual axis of the viewer).
Alternatively, the content transformation may compress the content by an appropriate amount along an axis perpendicular to the axis experiencing the distortion (e.g., perpendicular to the viewing axis). For example, considering again the tablet computing device lying flat on a table and viewed from a seated position, displayed content may appear distorted along the vertical content axis, a distortion that could be corrected by compressing the content horizontally.
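Both strategies reduce to a scale factor derived from sin(α): stretching divides by it, compressing multiplies by it. A brief sketch with worked numbers (the function names are illustrative):

```typescript
// Stretch: the apparent (foreshortened) length of content along the
// distorted axis is roughly length * sin(angle), so drawing the content at
// length / sin(angle) restores the intended proportions.
function stretchFactor(viewingAngle: number): number {
  return 1 / Math.sin(viewingAngle);
}

// Compress: alternatively, shrink the perpendicular axis by sin(angle) so
// the two axes remain in proportion, at the cost of smaller content.
function compressFactor(viewingAngle: number): number {
  return Math.sin(viewingAngle);
}

const alpha = (30 * Math.PI) / 180;  // 30-degree viewing angle
console.log(stretchFactor(alpha));   // ≈ 2   -> draw the content twice as tall
console.log(compressFactor(alpha));  // ≈ 0.5 -> or draw it half as wide
```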
Further, in embodiments in which the content is embodied as or includes text, the content transformation may modify the viewing perspective by increasing the vertical height of the rendered text. Such transformation may be appropriate for primarily textual content or for use on a computing device with limited graphical processing resources, for example, an e-reader device. For example, such transformation may be appropriate for content stored in a hypertext markup language format such as HTML5.
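For example, in a browser context the text-height adjustment could be approximated with a CSS transform, as in the following sketch; the use of `scaleY` and the element handling are assumptions for illustration, not the disclosed mechanism.

```typescript
// Increase the vertical height of rendered text as a function of the
// primary viewing angle. A CSS scaleY transform is one way to express
// this; an e-reader-class device might instead simply increase font size.
function correctTextHeight(element: HTMLElement, viewingAngle: number): void {
  const factor = 1 / Math.sin(viewingAngle);
  element.style.transform = `scaleY(${factor.toFixed(3)})`;
  element.style.transformOrigin = "top"; // keep the top edge anchored
}
```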
In some embodiments, the content transformation may transform content along more than one axis to improve the viewing perspective. For example, each content location may be scaled an appropriate amount along each axis (which may be orthogonal to each other in some embodiments) as a function of the viewing angle associated with each content location. Such content transformation is similar to the inverse of the well-known “keystone” perspective correction employed by typical visual projectors to improve viewing perspective when projecting onto a surface at an angle.
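A per-location sketch of such two-axis scaling follows; supplying a separate viewing angle per axis is an assumption made to keep the example small, and a full implementation would more likely apply a single projective (inverse-keystone) warp to the whole frame.

```typescript
// Scale a content location along two orthogonal screen axes as a function
// of per-axis viewing angles at that location. The aggregate effect across
// all locations approximates an inverse keystone correction.
function twoAxisScale(angleAlongY: number, angleAlongX: number): { scaleX: number; scaleY: number } {
  return {
    scaleX: 1 / Math.sin(angleAlongX),
    scaleY: 1 / Math.sin(angleAlongY),
  };
}

// Example: a location viewed at 35 degrees along the vertical axis and 80
// degrees along the horizontal axis is stretched about 1.74x vertically
// and only about 1.02x horizontally.
console.log(twoAxisScale((35 * Math.PI) / 180, (80 * Math.PI) / 180));
```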
After the content transformation has been generated in block 316, the content rendering module 206 renders the content using the content transformation in block 318. For conventional display technologies, the content rendering module 206 may apply the content transformation to an in-memory representation of the content and then rasterize the content for display on the display screen of the display 132. Alternative embodiments may apply the content transformation by physically deforming the pixels and/or other display elements of the display screen 134 of the display 132 (e.g., in those embodiments in which the display screen 134 is deformable).
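For a conventional raster display, applying the transformation before rasterization might look like the following canvas-based sketch; the 2D canvas API is a stand-in for whatever rendering pipeline the computing device 100 actually uses, and `content` is assumed to be an already-rendered bitmap of the UI.

```typescript
// Apply the corrective distortion while rasterizing content to a canvas.
function renderCorrected(
  canvas: HTMLCanvasElement,
  content: ImageBitmap,
  transform: { scaleX: number; scaleY: number },
): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.save();
  ctx.scale(transform.scaleX, transform.scaleY); // corrective stretch/compress
  ctx.drawImage(content, 0, 0);
  ctx.restore();
}
```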
After the content is rendered, the method 300 loops back to block 302, in which the computing device 100 determines whether to automatically adjust rendering based on the content viewing angle(s) of a viewer. In this way, the perspective correction may continually adapt to changes in viewing angle through an iterative process (e.g., when the viewer or the computing device 100 moves to a new relative location).
Referring back to block 302, if the computing device 100 determines not to automatically adjust rendering based on viewing angle, the method 300 advances to block 320. In block 320, the computing device 100 determines whether to manually adjust rendering based on viewing angle. Such determination may be made in use, may be pre-configured (e.g., with a hardware or software switch), or may be dependent on whether the computing device 100 includes the viewing angle input 138. If the computing device 100 determines not to manually adjust rendering based on viewing angle, the method 300 advances to block 322, in which the computing device 100 displays content as normal (i.e., without viewing perspective correction).
If, in block 320, the computing device 100 does determine to manually adjust rendering based on the viewing angle, the method 300 advances to block 324. In block 324, the viewing angle determination module 202 receives viewing angle input data from the viewing angle input 138. As described above, the viewing angle input 138 may be embodied as a hardware or software user control, which allows the user to specify a viewing angle. For example, the viewing angle input 138 may be embodied as a hardware thumbwheel that the viewer rotates to select a viewing angle. Alternatively, the viewing angle input 138 may be embodied as a software slider that the viewer manipulates to select a viewing angle. In some embodiments, the viewing angle input 138 may include multiple controls allowing the viewer to select multiple viewing angles.
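A software-slider embodiment of the viewing angle input 138 could be wired up as follows; the element id and value range are hypothetical.

```typescript
// Connect a software slider (one possible viewing angle input 138) to the
// viewing angle determination logic. Assumes an <input type="range"
// id="viewing-angle"> element with values in degrees, e.g., 10..90.
const slider = document.querySelector<HTMLInputElement>("#viewing-angle");
if (slider) {
  slider.addEventListener("input", () => {
    const degrees = Number(slider.value);
    const radians = (degrees * Math.PI) / 180;
    // Feed the manually selected angle to the viewing angle determination
    // module (e.g., the determineViewingAngles sketch shown earlier).
    console.log(`manual viewing angle: ${degrees} deg =`, radians);
  });
}
```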
In block 326, the viewing angle determination module 202 determines one or more viewing angles based on the viewing angle input data. To do so, in block 328, the viewing angle determination module 202 may determine a viewing angle for each content location as a function of the viewing angle input data. For example, the viewing angle input data may include multiple viewing angles selected by the user using multiple viewing angle input controls 138. The determination of multiple viewing angles may be desirable for large, immovable computing devices usually viewed from the same location such as, for example, table-top computers or the like.
Alternatively, in block 330, the viewing angle determination module 202 may determine a primary viewing angle as a function of the viewing angle input data and a pre-defined content location on the display screen of the display 132. As discussed above, the pre-defined content location may be embodied as the center of the display screen 134 of the display 132 in some embodiments. Additionally, in some embodiments, the viewing angle input 138 may allow the viewer to directly manipulate the primary viewing angle. As discussed above, in those embodiments utilizing a uniform content transformation, only the primary viewing angle may be determined in block 326.
Further, in some embodiments, the viewing angle determination module 202 may extrapolate the remaining viewing angles as a function of the primary viewing angle determined in block 330 and each pre-defined content location on the display screen. For example, the viewing angle determination module 202 may have access to the physical dimensions of the display screen of the display 132 or the dimensions of the computing device 100. Given a single, primary viewing angle and those dimensions, the viewing angle determination module 202 may be able to calculate the viewing angle corresponding to each remaining content location.
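One plausible extrapolation, sketched under stated assumptions: synthesize a virtual viewer position from the primary viewing angle at an assumed nominal distance (the disclosure does not require knowing the actual distance, so this value is an assumption), then solve the geometry at each remaining location, here the four screen corners.

```typescript
// Extrapolate viewing angles for remaining content locations from the
// primary angle and the physical screen dimensions. The screen lies in the
// z = 0 plane with its center at the origin; the virtual viewer is placed
// along the line of sight implied by the primary angle.
function extrapolateCornerAngles(
  primaryAngle: number,   // radians, measured at the screen center
  screenWidth: number,    // physical width in meters
  screenHeight: number,   // physical height in meters
  nominalDistance = 0.5,  // assumed viewer distance in meters
): number[] {
  // Virtual viewer: offset toward the viewer's side of the screen and
  // elevated so that the angle at the center equals primaryAngle.
  const vy = -nominalDistance * Math.cos(primaryAngle);
  const vz = nominalDistance * Math.sin(primaryAngle);
  const w = screenWidth / 2;
  const h = screenHeight / 2;
  const corners = [
    { x: -w, y: -h }, { x: w, y: -h },  // corners nearest the viewer
    { x: -w, y: h },  { x: w, y: h },   // corners farthest from the viewer
  ];
  return corners.map(p => Math.asin(vz / Math.hypot(-p.x, vy - p.y, vz)));
}
```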
After one or more viewing angles are determined in block 326, the method 300 advances to block 316. As discussed above, in block 316 the content transformation module 204 generates a content transformation useable to apply a corrective distortion to the content, to improve the viewing perspective of the content when viewed at the determined viewing angle(s). After the content transformation has been generated in block 316, the content rendering module 206 renders the content using the content transformation in block 318, as discussed above. After the content is rendered, the method 300 loops back to block 302, in which the computing device 100 determines whether to automatically adjust rendering based on the content viewing angle(s) of a viewer.
Illustrative examples of the devices, systems, and methods disclosed herein are provided below. An embodiment of the devices, systems, and methods may include any one or more, and any combination of, the examples described below.
In one example, a computing device to improve viewing perspective of content displayed on the computing device may include a display having a display screen on which content can be displayed, a viewing angle determination module, a content transformation module, and a content rendering module. In an example, the viewing angle determination module may determine one or more viewing angles of the content relative to a viewer of the content. In an example, the content transformation module may generate a content transformation for each one or more viewing angles as a function of the corresponding one or more viewing angles, the content transformation usable to apply a corrective distortion to the content to improve the viewing perspective of the content when viewed at the one or more viewing angles. In an example, the content rendering module may render, on the display screen, content as a function of the content transformation. In an example, to render content as a function of the content transformation may include to render content represented in a hypertext markup language format selected from the group consisting of: HTML, XHTML, and HTML5.
In an example, to generate the content transformation as a function of the one or more viewing angles may include to generate a uniform content transformation as a function of a single viewing angle of the one or more viewing angles, and to render the content may include to render the content using the uniform content transformation.
Additionally, in an example, the computing device may include a viewer location sensor. In an example, to determine one or more viewing angles may include to determine a location of a primary viewer as a function of sensor signals received from the viewer location sensor, and to determine a primary viewing angle as a function of the determined location of the primary viewer and a pre-defined content location on the display screen of the display. Additionally, in an example, the computing device may include a viewing angle input controllable by a user of the computing device. In an example, to determine one or more viewing angles may include to receive viewing angle input data from the viewing angle input, and to determine a primary viewing angle as a function of the viewing angle input data and a pre-defined content location on the display screen of the display. Additionally, in an example the pre-defined content location may include a center point of the display screen of the display.
In an example, to render content as a function of the content transformation may include to stretch the content along a reference axis parallel to the display screen of the display as a function of the primary viewing angle. In an example, to stretch the content may include to scale the content by a stretch factor calculated by dividing a length of the content defined in the direction of the reference axis by a sine of the primary viewing angle. Additionally, in an example, to render the content as a function of the content transformation may include to compress the content along a reference axis parallel to the display screen of the display as a function of the primary viewing angle. In an example, to compress the content may include to scale the content by a compression factor calculated by multiplying a length of the content defined in the direction of the reference axis by a sine of the primary viewing angle. Additionally, in an example, to render the content as a function of the content transformation may include to increase a height property of text of the content as a function of the primary viewing angle.
Additionally, in an example, to generate the content transformation as a function of the one or more viewing angles may include to generate a unique content transformation for each viewing angle of the one or more viewing angles. In an example, to render the content may include to render the content using the unique content transformation corresponding to each viewing angle of the one or more viewing angles.
Additionally, in an example, the computing device may include a viewer location sensor. In an example, to determine one or more viewing angles may include (i) to determine a location of a primary viewer as a function of sensor signals received from the viewer location sensor, and (ii) to determine a viewing angle for one or more content locations on the display screen of the display as a function of the determined location of the primary viewer and a reference plane defined by the display screen of the display. In an example, each content location may include a single pixel of the display screen. Additionally, in an example, each content location may include a group of pixels of the display screen. Additionally, in an example, to determine one or more viewing angles further may include to determine the primary viewer from a plurality of viewers of the content.
Additionally, in an example, the computing device may include a viewing angle input controllable by a user of the computing device. In an example, to determine one or more viewing angles may include to receive viewing angle input data from the viewing angle input. In an example, to generate the content transformation may include to generate the content transformation as a function of the viewing angle input data.
Additionally, in an example, the computing device may include a viewer location sensor. In an example, to determine one or more viewing angles may include to determine a location of a primary viewer as a function of sensor signals received from the viewer location sensor, to determine a primary viewing angle as a function of the determined location of the primary viewer and a pre-defined content location on the display screen of the display, and to extrapolate a viewing angle for additional content locations on the display screen as a function of the primary viewing angle and each corresponding content location.
Additionally, in an example, the computing device may include a viewing angle input controllable by a user of the computing device. In an example, to determine one or more viewing angles may include to receive viewing angle input data from the viewing angle input, to determine a primary viewing angle as a function of the viewing angle input data and a pre-defined content location on the display screen of the display, and to extrapolate a viewing angle for additional content locations on the display screen as a function of the primary viewing angle and each corresponding content location.
In an example, to render content as a function of the content transformation may include to stretch the content along a reference axis parallel to the display screen of the display as a function of the one or more viewing angles. In an example, to stretch the content may include to scale the content at each content location on the display screen by a stretch factor calculated by dividing a length of the corresponding content location defined in the direction of the reference axis by a sine of the viewing angle corresponding to that content location. In an example, to stretch the content may include to deform each content location. In an example, the reference axis may be a height axis of the content. In an example, the reference axis may be a width axis of the content.
Additionally, in an example, to render content as a function of the content transformation may include to compress the content at each content location on the display screen along a reference axis parallel to the display screen of the display as a function of the corresponding content location and the viewing angle associated with the corresponding content location. In an example, to compress the content may include to scale the content at each content location on the display screen by a compression factor calculated by multiplying a length of the corresponding content location defined in the direction of the reference axis by a sine of the viewing angle corresponding to that content location. In an example, to compress the content may include to deform each content location.
Additionally, in an example, to render the content as a function of the content transformation may include to scale the content along a first axis parallel to the display screen of the display as a function of the content location on the display screen of the display and the viewing angle corresponding to the content location, and to scale the content along a second axis perpendicular to the first axis and parallel to the display screen of the display as a function of the content location on the display screen of the display and the viewing angle corresponding to the content location. Additionally, in an example, to render content as a function of the content transformation may include to perform an inverse keystone three-dimensional perspective correction on the content.
In another example, a method for improving viewing perspective of content displayed on a computing device may include determining, on the computing device, one or more viewing angles of the content relative to a viewer of the content; generating, on the computing device, a content transformation for each one or more viewing angles as a function of the corresponding one or more viewing angles, the content transformation usable to apply a corrective distortion to the content to improve the viewing perspective of the content when viewed at the one or more viewing angles; and rendering, on a display screen of a display of the computing device, content as a function of the content transformation. In an example, rendering content as a function of the content transformation may include rendering content represented in a hypertext markup language format selected from the group consisting of: HTML, XHTML, and HTML5.
In an example, generating the content transformation as a function of the one or more viewing angles may include generating a uniform content transformation as a function of a single viewing angle of the one or more viewing angles. In an example, rendering the content may include rendering the content using the uniform content transformation.
Additionally, in an example, determining one or more viewing angles may include determining, on the computing device, a location of a primary viewer as a function of sensor signals received from a viewer location sensor of the computing device; and determining, on the computing device, a primary viewing angle as a function of the determined location of the primary viewer and a pre-defined content location on the display screen of the display. Additionally, in an example, determining one or more viewing angles may include receiving, on the computing device, viewing angle input data from a viewing angle input of the computing device, the viewing angle input being controllable by a user of the computing device; and determining, on the computing device, a primary viewing angle as a function of the viewing angle input data and a pre-defined content location on the display screen of the display. Additionally, in an example, the pre-defined content location may include a center point of the display screen of the display.
In an example, rendering content as a function of the content transformation may include stretching the content along a reference axis parallel to the display screen of the display as a function of the primary viewing angle. In an example, stretching the content may include scaling the content by a stretch factor calculated by dividing a length of the content defined in the direction of the reference axis by a sine of the primary viewing angle. Additionally, in an example, rendering content as a function of the content transformation may include compressing the content along a reference axis parallel to the display screen of the display as a function of the primary viewing angle. In an example, compressing the content may include scaling the content by a compression factor calculated by multiplying a length of the content defined in the direction of the reference axis by a sine of the primary viewing angle. Additionally, in an example, rendering the content as a function of the content transformation may include increasing a height property of text of the content as a function of the primary viewing angle.
Additionally, in an example, generating the content transformation as a function of the one or more viewing angles may include generating a unique content transformation for each viewing angle of the one or more viewing angles. In an example, rendering the content may include rendering the content using the unique content transformation corresponding to each viewing angle of the one or more viewing angles.
Additionally, in an example, determining one or more viewing angles may include determining, on the computing device, a location of a primary viewer as a function of sensor signals received from a viewer location sensor of the computing device; and determining, on the computing device, a viewing angle for one or more content locations on the display screen of the display as a function of the determined location of the primary viewer and a reference plane defined by the display screen of the display. In an example, each content location may include a single pixel of the display screen. Additionally, in an example, each content location may include a group of pixels of the display screen. Additionally, in an example, determining one or more viewing angles further may include determining, on the computing device, the primary viewer from a plurality of viewers of the content.
Additionally, in an example, determining one or more viewing angles may include receiving, on the computing device, viewing angle input data from a viewing angle input of the computing device, the viewing angle input being controllable by a user of the computing device. In an example, generating the content transformation may include generating the content transformation as a function of the viewing angle input data.
Additionally, in an example, determining one or more viewing angles may include determining, on the computing device, a location of a primary viewer as a function of sensor signals received from a viewer location sensor of the computing device; determining, on the computing device, a primary viewing angle as a function of the determined location of the primary viewer and a pre-defined content location on the display screen of the display; and extrapolating, on the computing device, a viewing angle for additional content locations on the display screen as a function of the primary viewing angle and each corresponding content location.
Additionally, in an example, determining one or more viewing angles may include receiving, on the computing device, viewing angle input data from a viewing angle input of the computing device, the viewing angle input being controllable by a user of the computing device; determining, on the computing device, a primary viewing angle as a function of the viewing angle input data and a pre-defined content location on the display screen of the display; and extrapolating, on the computing device, a viewing angle for additional content locations on the display screen as a function of the primary viewing angle and each corresponding content location.
In an example, rendering content as a function of the content transformation may include stretching the content along a reference axis parallel to the display screen of the display as a function of the one or more viewing angles. In an example, stretching the content may include scaling the content at each content location on the display screen by a stretch factor calculated by dividing a length of the corresponding content location defined in the direction of the reference axis by a sine of the viewing angle corresponding to that content location. In an example, stretching the content may include deforming each content location. In an example, the reference axis may be a height axis of the content. In an example, the reference axis may be a width axis of the content.
Additionally, in an example, rendering content as a function of the content transformation may include compressing the content at each content location on the display screen along a reference axis parallel to the display screen of the display as a function of the corresponding content location and the viewing angle associated with the corresponding content location. In an example, compressing the content may include scaling the content at each content location on the display screen by a compression factor calculated by multiplying a length of the corresponding content location defined in the direction of the reference axis by a sine of the viewing angle corresponding to that content location. In an example, compressing the content may include deforming each content location.
Additionally, in an example, rendering content as a function of the content transformation may include scaling the content along a first axis parallel to the display screen of the display as a function of the content location on the display screen of the display and the viewing angle corresponding to the content location, and scaling the content along a second axis perpendicular to the first axis and parallel to the display screen of the display as a function of the content location on the display screen of the display and the viewing angle corresponding to the content location. Additionally, in an example, rendering content as a function of the content transformation may include performing an inverse keystone three-dimensional perspective correction on the content.