The present disclosure is related generally to wireless communication devices and, more particularly, to methods for correcting perspective on an electronic device.
Traditionally, video displays (e.g., smartphone screens) render images under the assumption that the viewer will look at the image orthogonally. That is, the viewer will generally perceive each portion of the display as orthogonal to the plane of the viewer's line of sight. However, with the advent of newer types of electronic devices, such as wearable devices (e.g., smart watches), and with the introduction of so-called flexible displays, this assumption may no longer be valid.
While the appended claims set forth the features of the present techniques with particularity, these techniques, together with their objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings.
As used herein, the term “image” includes a still image, moving image, portion of a still image, and portion of a moving image, including an image (e.g., windows, text, menus, buttons, and icons) rendered by a graphical user interface. Also, as used herein, the term “mapping” refers to an operation that associates an element of a given set (e.g., a set of logical pixels) with one or more elements of a second set (e.g., a set of physical pixels).
This disclosure is generally directed to a method for correcting the perspective of an image. According to various embodiments, the method, which is carried out on an electronic device having a display, involves mapping logical pixels of the image to physical pixels of the display based on the expected viewing angle of the location (e.g., the screen location) of the display at which the logical pixels are to be rendered. In one embodiment, the electronic device maps a first set of logical pixels of the image to a first set of physical pixels of the display at a first ratio (e.g., number of logical pixels per physical pixel) and maps a second set of logical pixels of the image to a second set of physical pixels of the display at a second ratio, which is different from the first ratio. The effect of this mapping, according to various embodiments, is to make the apparent size of certain portions of the image larger in order to correct for perspective distortion caused by the viewing angle at which the image is viewed.
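By way of illustration only, the following Python sketch shows one way such a two-ratio mapping might be implemented; the function names, ratios, and pixel values are hypothetical and do not appear in the disclosure.

```python
# Hypothetical sketch of mapping logical pixels to physical pixels at
# two different ratios. Names and ratios are illustrative only.

def map_logical_to_physical(logical_row, split, ratio_a=1, ratio_b=2):
    """Map one row of logical pixels to physical pixels.

    Columns before `split` are mapped at `ratio_a` physical pixels per
    logical pixel; columns at or after `split` are mapped at `ratio_b`.
    """
    physical_row = []
    for col, pixel in enumerate(logical_row):
        ratio = ratio_a if col < split else ratio_b
        physical_row.extend([pixel] * ratio)  # replicate onto N physical pixels
    return physical_row

# Example: an 8-pixel logical row; the last 4 columns are rendered
# twice as wide to counteract foreshortening in an oblique region.
logical = [10, 11, 12, 13, 14, 15, 16, 17]
print(map_logical_to_physical(logical, split=4))
# [10, 11, 12, 13, 14, 14, 15, 15, 16, 16, 17, 17]
```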
In various embodiments, the electronic device applies perspective correction to purposefully distort the “correct” image data as it is rendered by the device such that the image appears to be non-distorted to the user when the user is viewing the image and interacting with the electronic device at a non-orthogonal angle. In other words, the non-orthogonal viewing angle naturally distorts the image (e.g., the shapes and angles). Thus, the electronic device compensates for this distortion by “distorting” the image in the opposite way. In one embodiment, the electronic device carries out perspective correction by rasterizing logical pixels of an image in a non-square, non-equal manner onto physical pixels of the display.
In some embodiments, when a user views the display at a non-orthogonal (i.e., oblique) angle, the images (if uncorrected) appear dimmer and bluer to the user. The approximate distortion caused by the display is known in advance and is based on (1) the shape of the surface of the display and (2) the expected viewing angle of the display to the user when the electronic device is in the most comfortable position with respect to the user. Based on these factors, the electronic device can digitally adjust the logical pixels as a function of their screen position and then render them unequally using physical pixels. The content itself need not be modified; thus, photos, videos, maps, and apps need not be changed. For example, a look-up table (“LUT”) that matches the angles of the display can be predefined, stored in memory, and subsequently used by the electronic device.
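A minimal sketch of such a predefined, position-indexed LUT follows; the banding of screen rows and the gain values are assumptions chosen purely for illustration.

```python
# Hypothetical position-indexed LUT. Each entry gives a luminance gain
# for a band of screen rows, chosen offline to match the known display
# curvature and expected viewing angle. Values are illustrative only.
ANGLE_LUT = {
    (0, 99):    1.00,  # rows expected to be near-orthogonal: unmodified
    (100, 199): 1.15,  # moderately oblique rows: brightened slightly
    (200, 319): 1.35,  # strongly oblique rows: brightened more
}

def gain_for_row(row):
    """Look up the predefined luminance gain for a screen row."""
    for (lo, hi), gain in ANGLE_LUT.items():
        if lo <= row <= hi:
            return gain
    return 1.0

def adjust_pixel(value, row):
    """Scale a logical pixel value by its screen position's gain."""
    return min(255, round(value * gain_for_row(row)))

print(adjust_pixel(128, row=250))  # -> 173: oblique region brightened
```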
In an embodiment, the electronic device maps each logical pixel (of all or a portion of the image) to a physical pixel on the display and sets a value for one or more of the luminance, chrominance, and reflectance of the physical pixel based on the expected viewing angle of the viewing surface at which the physical pixel is located (e.g., brightens the pixels for those surfaces that are expected to be oblique to the plane of the user's view and dims or leaves unmodified the pixels for those surfaces that are expected to be orthogonal to the plane of the user's view). The electronic device then renders the logical pixel on the display using the physical pixel. These procedures can make the luminance and color of the image appear more uniform to the user.
In an embodiment, some logical pixel values may remain unmodified (i.e., the logical pixel is rendered onto the physical pixel using the same values specified by the logical pixel), some may be modified together (e.g., all of the red luminance (“R”), green luminance (“G”), blue luminance (“B”), and reflectance values are increased or decreased by the same amount to increase or decrease the overall luminance or reflectance), and some may be modified differently from others (e.g., the B value is reduced more than the R or G values in order to prevent a blue-shift of the physical pixel).
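The per-channel adjustment described above might look like the following sketch; the gain values are invented, with the B channel boosted less than R and G (i.e., reduced relative to them) to avoid a blue-shift.

```python
# Hypothetical per-channel correction. The gains are invented for
# illustration: R and G are boosted equally, while B is boosted less
# to counteract the blue-shift seen at oblique viewing angles.
def correct_channels(r, g, b, rg_gain=1.2, b_gain=1.05):
    clamp = lambda v: min(255, round(v))
    return clamp(r * rg_gain), clamp(g * rg_gain), clamp(b * b_gain)

print(correct_channels(100, 100, 100))  # -> (120, 120, 105)
```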
In some embodiments, the electronic device maps each logical pixel (of all or a portion of the image) to a physical pixel on the display and sets a value for one or more of the luminance, chrominance, and reflectance of the physical pixel based on the determined current viewing angle of the viewing surface of the display on which the physical pixel is located. In various implementations, the electronic device uses sensors, such as gyroscopic sensors, to detect the angle of the display, or uses a camera (e.g., an infrared camera) to track the user's eyes or gaze while the user looks at the screen. The electronic device may, for example, dynamically adjust the LUT values for each physical pixel location to alter the correction as the user moves the device (e.g., moves his or her arm while viewing a smart watch).
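One hypothetical way to recompute such LUT values on the fly from gyroscopic data is sketched below; the cosine dimming model, the per-row curvature constant, and the gain cap are all assumptions made for the sake of the example.

```python
# Hypothetical dynamic adjustment: recompute per-row gains from the
# current device tilt reported by a gyroscopic sensor. The cosine
# model and all constants are assumptions, not part of the disclosure.
import math

def rebuild_gains(tilt_deg, rows=320, curvature_deg_per_row=0.2):
    """Return a per-row gain list for the current device tilt.

    Each row's effective viewing angle is the device tilt plus the
    fixed angle contributed by the display's curvature at that row;
    dimming is modeled as the cosine of that angle and inverted.
    """
    gains = []
    for row in range(rows):
        angle = math.radians(tilt_deg + row * curvature_deg_per_row)
        gains.append(1.0 / max(math.cos(angle), 0.25))  # cap the boost
    return gains

gains = rebuild_gains(tilt_deg=20.0)
print(round(gains[0], 2), round(gains[319], 2))  # gain grows toward oblique rows
```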
In some embodiments, the electronic device may be configured so that the various correction techniques described herein can be adjusted, and turned on or off, by a user. In some embodiments, the device itself may initiate one or more of these correction techniques. For example, when the device shows certain content (e.g., a movie), the device could automatically make corrections and could subsequently turn the corrections off for other content.
Turning to FIG. 1, the device 100 includes a display 102. In one embodiment, the device 100 is a smart watch and the display 102 wraps around the user's wrist when the device 100 is worn. Thus, when a user looks at the device 100 in a typical fashion, different portions of the display 102 are (and are perceived to be) at different angles with respect to the user's line of sight. For example, a first region 104 of the display 102 is at a first angle with respect to the user's line of sight 106, a second region 108 is at a second angle with respect to the user's line of sight 106, and a third region 110 is at a third angle with respect to the user's line of sight 106.
The display 102 is organized into physical pixels including a first physical pixel set 112 in the first region 104, a second physical pixel set 114 in the second region 108, and a third physical pixel set 116 in the third region 110. Each set of pixels may contain multiple pixels or a single pixel. As discussed below in further detail, the device 100 maps logical pixels of an image onto the physical pixels.
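Purely as an illustration, the region-to-angle organization described above might be represented in software as follows; the names, angles, and column ranges are hypothetical.

```python
# Hypothetical representation of the regions of FIG. 1. Each region
# records its expected angle to the user's line of sight and the
# range of physical pixel columns it covers; numbers are illustrative.
REGIONS = [
    {"name": "first region 104",  "angle_deg": 10, "cols": range(0, 80)},
    {"name": "second region 108", "angle_deg": 40, "cols": range(80, 160)},
    {"name": "third region 110",  "angle_deg": 70, "cols": range(160, 240)},
]

def region_for_column(col):
    """Find which display region a physical pixel column falls in."""
    for region in REGIONS:
        if col in region["cols"]:
            return region
    raise ValueError(f"column {col} is off-screen")

print(region_for_column(100)["angle_deg"])  # -> 40
```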
Turning to FIG. 2, the device 100 includes a processor 202, a memory 204, a gyroscopic sensor 206, and a camera 208. The memory 204 stores a data structure 210 that the device 100 uses to map logical pixels to physical pixels.
In some embodiments, the device 100 uses orientation data from the gyroscopic sensor 206 to alter the mapping of logical pixels to physical pixels. For example, the device 100 may modify the data structure 210 based on the angle at which the device 100 is oriented in order to compensate for perspective based on the user's angle of view. In other embodiments, the device 100 uses data from the camera 208 to alter the mapping of logical pixels to physical pixels. For example, the camera 208 may indicate where the user is looking and the device 100 may modify the data structure 210 based on the direction of the user's gaze.
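A hypothetical sketch of the gaze-driven variant follows; the assumption that the camera 208 yields a single gaze row, and the linear fall-off of the correction, are illustrative only.

```python
# Hypothetical gaze-driven update of the mapping table (data structure
# 210). The camera is assumed to report the display row the user is
# looking at; rows farther from the gaze point get larger corrections.
def update_mapping(gaze_row, rows=320, step=0.004):
    """Rebuild per-row gains centered on the user's gaze point."""
    return [1.0 + step * abs(row - gaze_row) for row in range(rows)]

table = update_mapping(gaze_row=160)
print(round(table[160], 2), round(table[0], 2))  # -> 1.0 at gaze, 1.64 at edge
```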
The device 100 may include other components that are not depicted, such as wireless networking hardware (e.g., a WiFi chipset or a cellular baseband chipset), through which the device 100 communicates with other devices over networks such as WiFi networks or cellular networks, or short-range communication hardware (e.g., a Bluetooth® chipset), through which the device 100 communicates with a companion device (e.g., the device 100 is a smart watch and communicates with a paired cell phone). The elements of FIG. 2 are communicatively linked to one another.
The processor 202 retrieves instructions from the memory 204 and operates according to those instructions to carry out various functions, including the methods described herein. Thus, when this disclosure refers to the device 100 carrying out an action, it is, in many embodiments, the processor 202 that actually carries out the action (in coordination with other pieces of hardware of the device 100 as necessary).
Turning to FIG. 3, an example implementation of the data structure 210 is depicted.
Also note that the data structure 210 does not necessarily replace the typical LUTs that are commonly used by graphics processing systems. In fact, both the data structure 210 and a typical LUT could be combined into a larger, common LUT indexed by screen position and pixel value.
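The following sketch illustrates one hypothetical form of such a combined LUT, indexed by a screen band and an input pixel value; the gamma stand-in for the usual value LUT and the per-band gains are assumptions.

```python
# Hypothetical combined LUT indexed by (screen band, input value): the
# usual value-remapping LUT and the position-dependent correction of
# data structure 210 folded into one table. Built once, used per pixel.
def gamma(value):                     # stand-in for the usual value LUT
    return round(255 * (value / 255) ** 0.8)

POSITION_GAIN = {0: 1.0, 1: 1.15, 2: 1.35}   # per-band gains (illustrative)

COMBINED_LUT = {
    (band, v): min(255, round(gamma(v) * gain))
    for band, gain in POSITION_GAIN.items()
    for v in range(256)
}

print(COMBINED_LUT[(0, 128)], COMBINED_LUT[(2, 128)])  # same input, two bands
```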
Turning to FIG. 4, FIG. 5, and FIG. 6, process flow diagrams depict methods carried out by the device 100 according to various embodiments.
In view of the many possible embodiments to which the principles of the present discussion may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the claims. Furthermore, it should be understood that the procedures set forth in the process flow diagrams may be reordered or expanded without departing from the scope of the claims. For example, blocks 602 and 604 of FIG. 6 may be performed in a different order.