The present invention relates generally to digital image composition, and particularly to composing a digital image to provide for the perceptibility of the image as viewed on a substantially transparent screen.
Advances in display technology have greatly enhanced the accessibility of digital information. Heads-up displays (HUDs), for example, are becoming more prominent display accessories for military and commercial aviation, automobiles, gaming, and the like. HUDs display a digital image on a transparent screen placed in front of a user. From the perspective of the user, then, HUDs superimpose the digital image onto whatever is behind the screen. This allows the user to more quickly, more easily, and more safely view the image without looking away from his or her desired viewpoint. For instance, with such technology a driver of an automobile can view navigational instructions or speed information without taking his or her eyes off the road, a fighter pilot can view target information or weapon status information without taking his or her eyes off of the target, and so on. And although for perhaps less practical advantages than these, some computer laptops, mobile communication devices, and other such mobile devices are now equipped with transparent screens as well.
The ability of a transparent screen to conveniently superimpose a digital image onto whatever is behind the screen is thus an advantage of such a screen. However, that advantage also creates a practical challenge. Indeed, depending on exactly what is behind the screen, all or part of the digital image may sometimes be difficult for a user to perceive. Consider, for example, a digital image that includes green text. If a patch of green trees is behind the transparent screen, the green text will be much more difficult for the user to perceive than if instead a patch of purple flowers had been behind the screen.
Of course in many cases a user cannot practically change the position or orientation of the transparent screen so that whatever is behind the screen provides better perceptibility of a digital image. In the case of an automobile heads-up display, for instance, such would require changing the direction of the entire automobile. Moreover, even in those cases where it may indeed be practical, there may not be anything in the vicinity of the user that would provide better perceptibility (e.g., there may not be a patch of purple flowers around).
Teachings herein prepare a digital image for display on a substantially transparent screen. The teachings advantageously recognize that the perceptibility of the digital image on the screen will often depend on what is visible to a user through the screen, since that will effectively serve as the background of the screen. In a general sense, then, the methods and apparatus determine the effective background of the transparent screen and then compose the digital image so that the image will be perceptible against that background.
More particularly, in various embodiments discussed below, a method of preparing a digital image includes receiving environmental background data relating to an environmental background which is visible, at least in part, to a user through the screen. The method further includes dynamically calculating, based on that environmental background data, which part of the environmental background is visible to the user through the screen and thereby serves as an effective background of the screen. For example, in some embodiments the environmental background data comprises an image of the environmental background, such that dynamic calculation entails identifying which part of that image serves as the effective background of the screen. Having calculated the effective background of the screen, the method next includes composing the digital image for perceptibility as viewed against that effective background and outputting the composed digital image as digital data for display on the screen.
In composing the digital image for perceptibility, some embodiments recognize the digital image as consisting of one or more logical objects (e.g., buttons of a user interface) that may be spatially arranged and/or colored in different possible ways without substantially affecting the meaning conveyed by the image. Exploiting this property, these embodiments compose the digital image from one or more logical objects that have a spatial arrangement or coloration determined in dependence on evaluation of the effective background. For example, the embodiments may select certain colors for different logical objects in the digital image and/or arrange those objects within the image so that they are perceptible as viewed against the effective background.
An image processor configured to prepare a digital image as described above includes a communications interface, an effective background calculator, and an image composer. The communications interface is configured to receive the environmental background data, while the effective background calculator is configured to dynamically calculate the effective background based on that environmental background data. The image composer is then configured to compose the digital image for perceptibility as viewed against that effective background and to output the digital image for display on the screen.
The image processor may be communicatively coupled to a memory, one or more detectors, and the transparent screen. The one or more detectors are configured to assist the image processor with this dynamic calculation and composition, by providing the image processor with the environmental background data. In some embodiments, for example, the one or more detectors include a rear camera mounted on or near the screen that directly captures an image of the environmental background and provides that rear image to the image processor. Having obtained this rear image from the detector(s), the image processor may then dynamically calculate which part of the rear image serves as the effective background of the screen.
In embodiments where the screen remains fixed relative to the user, the image processor may calculate this part of the rear image as simply a fixed or pre-determined part of the rear image (e.g., by implementing a pre-determined cropping of the rear image). In other embodiments, though, such as where a user may view the screen at any number of different angles, the image processor may calculate the part of the rear image that serves as the effective background based on the user's actual viewing angle. In particular, the one or more detectors mentioned above may further include a front camera that captures an image of the user and provides that front image to the image processor. The image processor then calculates the user's viewing angle by detecting the location of the user's face or eyes in the front image (or a processed version thereof). The image processor may then dynamically calculate which part of the rear image serves as the effective background of the screen based on the viewing angle determined from the front image.
Of course, the present invention is not limited by the above features and advantages. Those of ordinary skill in the art will appreciate additional features and advantages upon reading the following detailed description of example embodiments, and reviewing the figures included therein.
The transparent screen 22 in some embodiments is integrated into the device 10 as a dedicated display for the device 10. In other embodiments, the transparent screen 22 is external to the device 10, but may be communicatively coupled to the device 10 as a display accessory. In either case, whatever the screen 22 is physically disposed in front of is generally referred to herein as the environmental background. In one sense, then, the environmental background includes the various objects, surfaces, and the like that collectively form the general scenery behind the screen 22.
As the screen 22 is substantially transparent, at least part of this environmental background will be visible to a user of the device 10 through the screen 22. Which particular part of the environmental background will be visible may in some cases depend on several factors, such as the dimensions of the screen 22, the position and orientation of the screen 22 relative to the user, and so on. Whatever part is visible, though, will effectively serve as the background of the screen 22 and will thus have an effect on the perceptibility of any image displayed on the screen 22.
In this regard, the image processor 12 is advantageously configured to prepare a digital image 24 for display on the transparent screen 22. As shown, the image processor 12 includes a communications interface 12A configured to receive environmental background data 15 relating to the environmental background. The image processor 12 further includes an effective background calculator 12B configured to dynamically calculate, based on the environmental background data 15, which part of the environmental background is visible to the user through the screen 22 and thereby serves as the effective background of the screen 22. An image composer 12C also included in the image processor 12 is then configured to compose the digital image 24 for perceptibility as viewed against that effective background (e.g., in accordance with digital image data 13 stored in memory 14). Such composition may entail selecting certain colors for different logical objects in the digital image 24 and/or arranging those objects within the image 24 so that they are perceptible as viewed against the effective background. These and other approaches to composition of the digital image 24 are discussed in more detail below.
With the image 24 composed for perceptibility, the image composer 12C is configured to output the composed image 24 as digital data for display on the screen 22. In particular reference to
The one or more detectors 16 are configured to assist the image processor 12 with this dynamic calculation and composition, by directly or indirectly providing the image processor 12 with environmental background data 15. In some embodiments, for example, the one or more detectors 16 include a rear camera mounted on or near the screen 22 that captures an image of the environmental background and provides that rear image to the image processor 12. Having received this rear image from the detector(s) 16, the image processor 12 may then dynamically calculate which part of the rear image serves as the effective background of the screen 22.
Consider, for example,
With the rear camera 16 mounted to the HUD system 26 and rotating left and right with the orientation of the user's head, the camera 16 dynamically captures a rear image of the environmental background 32.
In
In more detail, the image processor 12 may first determine point 44 as the calibrated center point of the rear image 40. That is, in embodiments where the rear camera 16 is physically offset from the geometric center of the screen 22, the actual center point 46 of the rear image 40 does not correspond to the central point of the user's viewpoint through the screen 22. In
As suggested above, the image processor 12 calculates the particular dimensions of area 42 based on the dimensions of the screen 22, the dimensions of the rear image 40, the field of view of the rear camera 16, and the distance between the user and the screen 22. In particular, the image processor 12 calculates the length l along one side of area 42 (e.g., in pixels) according to the following:
where s is the length along a corresponding side of the screen 22, L is the length along a corresponding side of the rear image 40 (e.g., in pixels), α is the field of view of the rear camera 16, and d is the distance between the user 30 and the screen 22 (which may be pre-determined according to the typical distance between a user and the particular type of screen 22).
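The displayed equation itself does not survive in this text, so the exact expression used is not reproduced here. The following is only a plausible sketch of the underlying geometry, under the stated assumptions; the function name and the small-background-distance approximations are illustrative, not taken from the specification:

```python
import math

def visible_side_length_px(s, L, alpha, d):
    """Plausible sketch: length l (in pixels) along one side of area 42.

    s     -- length along the corresponding side of the screen 22
    L     -- length along the corresponding side of the rear image 40, in pixels
    alpha -- field of view of the rear camera 16 along that side, in radians
    d     -- distance between the user 30 and the screen 22

    Assumes the environmental background is far away compared with d, so the
    angular extent of the view through the screen is about 2*atan(s/(2*d)),
    and that the camera maps angle to pixels roughly linearly (L pixels
    spanning the field of view alpha).
    """
    theta = 2.0 * math.atan(s / (2.0 * d))  # angle subtended by the screen side
    return L * theta / alpha
```

For example, a 0.10 m screen side viewed from 0.5 m, imaged by a 640-pixel-wide camera with a 60-degree field of view, maps to roughly 120 pixels along that side of area 42 under these assumptions.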
Of course, many or all of these values may in fact be fixed for a given device 10 and/or HUD system 26. The rear camera 16, for example, may remain fixed at a given distance above the center of the screen 22. Likewise, the dimensions of the screen 22 may be fixed, as may the dimensions of the rear image 40, the field of view of the rear camera 16, and the distance between the screen 22 and the user 30. Moreover, the user's head and eyes remain fixed relative to the screen 22, as the HUD system 26 remains fixed to the user 30. Accordingly, the image processor 12 in some embodiments is configured to derive the area 42 as simply a fixed or pre-determined part of the rear image 40 (e.g., by implementing a pre-determined cropping of the rear image 40).
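Where all of these values are fixed, the pre-determined cropping mentioned above reduces to selecting a constant sub-array of the rear image. A minimal sketch, assuming the rear image is represented as a list of pixel rows (the function name and bounds are illustrative):

```python
def fixed_crop(rear_image, top, left, height, width):
    """Pre-determined cropping of the rear image: the crop bounds are
    constants derived once from the fixed screen, camera, and user
    geometry, then reused for every frame."""
    return [row[left:left + width] for row in rear_image[top:top + height]]
```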
Notice in
Returning back to the example of
In
To compose the digital image 24 in this way, the image processor 12 may conceptually “subdivide” the effective background (e.g., area 42) into different regions and then determine, for each region, the extent to which the region contrasts with one or more different colors, and/or the color variance in the region. Such relationships between different colors, i.e., whether or not a certain color contrasts well with another color, may be stored as a look-up table in memory 14 or computed by the image processor 12 on the fly. The image processor 12 may then place logical objects within the digital image 24 based on this determination, so that any given logical object will be displayed against a region of effective background which has higher contrast with one or more colors of the logical object than another region and/or lower color variance than another region.
Of course, the image processor 12 may quantify these values for determining the particular placement of a logical object like the green YES button. The image processor 12 may, for instance, quantify the extent to which regions of the effective background contrast with one or more colors in terms of contrast metrics, and compare the contrast metrics to determine the region which has the highest contrast with those color(s). Similarly, the image processor 12 may quantify the color variance in the regions of the effective background as a variance metric, and compare the variance metrics to determine the region which has the lowest color variance. Finally, the image processor 12 may quantify the extent to which a region contrasts with one or more colors and the color variance in that region as a joint metric. Such a joint metric may be based upon, for example, a weighted combination of one or more contrast metrics for the region and a variance metric for the region. The image processor 12 may then compare the joint metrics to determine the region that offers the best perceptibility as indicated by the joint metric for that region.
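The metrics described above can be sketched as follows. This is one possible quantification under assumed definitions (mean RGB distance for contrast, summed per-channel variance for color variance, a weighted difference for the joint metric); the specification does not fix particular formulas:

```python
def contrast_metric(color, region_pixels):
    """Mean Euclidean RGB distance between an object color and a region."""
    n = len(region_pixels)
    return sum(
        sum((c - p) ** 2 for c, p in zip(color, px)) ** 0.5 for px in region_pixels
    ) / n

def variance_metric(region_pixels):
    """Sum of per-channel variances across the region's pixels."""
    n = len(region_pixels)
    means = [sum(px[ch] for px in region_pixels) / n for ch in range(3)]
    return sum(
        sum((px[ch] - means[ch]) ** 2 for px in region_pixels) / n
        for ch in range(3)
    )

def best_region(object_color, regions, w_contrast=1.0, w_variance=0.5):
    """Pick the region index maximizing a joint metric: a weighted
    combination rewarding high contrast and penalizing high variance."""
    def joint(region):
        return (w_contrast * contrast_metric(object_color, region)
                - w_variance * variance_metric(region))
    return max(range(len(regions)), key=lambda i: joint(regions[i]))
```

Under these definitions, a green YES button evaluated against a green-dominated region and a purple-dominated region would be placed over the purple one, since that region yields the higher joint metric.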
The image processor 12 may also take other considerations into account when placing a logical object like the green YES button, such as the placement of other logical objects, e.g., the red NO button. In this regard, the image processor 12 may be configured to jointly place multiple logical objects within the digital image 24, to provide for perceptibility of the image 24 as a whole rather than for any one logical object.
In other embodiments, the image processor 12 may not place logical objects within the digital image 24 based on evaluation of the effective background. Rather, in these embodiments, the logical objects' placement is set in some other way, and the image processor 12 instead selects color(s) for the objects based on evaluation of the effective background. Thus, for any given logical object otherwise placed, the image processor 12 selects one or more colors for the object that have higher contrast with a region of the effective background against which the logical object will be displayed than other possible colors.
In
Those skilled in the art will of course appreciate that
For example, in
FIGS. 5 and 6A-6G illustrate still other embodiments. In these embodiments, the transparent screen 22 does not move with the orientation of a user's head, and thus does not remain fixed relative to the user, as in
The device 10 also includes a rear camera 16B on a rear face 10B of the device 10, for capturing a rear image of the environmental background much in the same way as discussed above. Having also received this rear image as environmental background data 15, the image processor 12 dynamically calculates which part of the rear image serves as the effective background of the screen 22 based on the viewing angle determined from the front image.
To assist the image processor 12 in determining the viewing angle, the front camera 16A is configured to capture a front image that includes the user.
In some embodiments, the image processor 12 is configured to determine the viewing angle from this front image 90 by first calibrating the center point 96 of the image 90. That is, as the front camera 16A of the device 10 is mounted above the center of the screen 22, the image processor 12 calibrates the actual center point 96 of the front image 90 by displacing it vertically downward to compensate for that offset. The image processor 12 may then digitally flip the front image 90 about a vertical axis 92 extending from the resulting calibrated center point 94, to obtain a horizontally flipped (i.e., horizontally mirrored) version of the front image 90A as shown in
Notice that because the front camera 16A was horizontally centered above the center of the screen 22 in this example, the image processor 12 need not have calibrated the center point 96 of the front image 90 before horizontally flipping the image 90 about the vertical axis 92. Indeed, the vertical axis 92 remained the same both before and after calibration. In embodiments where the front camera 16A is not horizontally centered, though, the vertical axis 92 would shift with the displacement of the center point 96, meaning that calibration should be done prior to horizontal flipping.
Of course, in other embodiments, the image processor 12 calculates the viewing angle A without digitally flipping the front image 90, since such flipping involves somewhat intensive image processing. In these embodiments, the image processor 12 instead calculates the viewing angle A directly from the front image 90 (i.e., the un-flipped version shown in
In any event,
Specifically, the processor 12 determines the location in the rear image 70 that would correspond to the location of the user's face or eyes in the flipped version of the front image 90A, as transposed across the calibrated center point 74 at the viewing angle A. This may entail, for example, determining the location as the point that is offset from the effective center point 74 of the rear image 70 by the same amount and at the vertically opposite angle A as the user's face or eyes are from the effective center point 94 of the flipped version of the front image 90A.
Having determined this location in the rear image 70, the image processor 12 then derives the area 72 around that location as being the part of the rear image 70 that serves as the effective background of the screen 22. Similar to embodiments discussed above, the processor 12 derives this area 72 based on the dimensions of the screen 22, the dimensions of the rear image 70, the field of view of the rear camera 16B, and the distance between the user and the screen 22. Unlike the previous embodiments, though, because the user's head and eyes do not remain fixed relative to the screen 22, the image processor 12 may not derive the area 72 as simply a fixed or pre-determined part of the rear image 70; indeed, the size and location of area 72 within the rear image 70 may vary depending on the user's viewing angle and/or the distance between the user and the screen 22.
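The transposition step above can be sketched as follows. This is a simplified illustration assuming both cameras share the same resolution and field of view, so that pixel offsets translate one-to-one between the two images; the function name and that assumption are not from the specification:

```python
def effective_background_center(eye_xy, front_center, rear_center):
    """Transpose the eye location in the (horizontally flipped) front
    image across the calibrated center, yielding the point in the rear
    image around which the effective-background area 72 is derived.

    eye_xy and front_center are (x, y) in the flipped front image;
    rear_center is the calibrated center (x, y) of the rear image.
    The offset is carried over directly in x and mirrored in y,
    reflecting the "vertically opposite angle" described in the text.
    """
    dx = eye_xy[0] - front_center[0]
    dy = eye_xy[1] - front_center[1]
    return (rear_center[0] + dx, rear_center[1] - dy)
```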
Consider, for instance,
Regardless of the particular location of the effective background within the rear image, though, the image processor 12 composes the digital image 24 for perceptibility as viewed against that effective background in the same way as discussed above with respect to
Of course, the image processor 12 may alternatively compose the digital image 24 for perceptibility as viewed against the effective background in other ways. The image processor 12 may for instance compose the digital image 24 to in a sense equalize the color intensities of the effective background, and thereby make the digital image 24 more perceptible. In this case, the image processor 12 composes parts of the digital image 24 that will display against low color intensities of the effective background with higher color intensities, and vice versa. Such may be done for each color component of the digital image, e.g., red, green, and blue, and for parts of the digital image 24 at any level of granularity, e.g., per pixel or otherwise.
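As a minimal sketch of this equalization, one possible mapping is a simple per-component inversion: parts of the image displayed over a low background intensity receive a correspondingly high intensity, and vice versa. The specification does not fix an exact mapping, so the inversion below is an assumption:

```python
def equalize_component(bg_value, max_value=255):
    """Per color component: low background intensity -> high composed
    intensity, and vice versa (illustrative simple inversion)."""
    return max_value - bg_value

def equalize_pixel(bg_rgb):
    """Apply the inversion per component, e.g. per pixel of the part of
    the digital image overlying that background pixel."""
    return tuple(equalize_component(v) for v in bg_rgb)
```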
In other embodiments, the image processor 12 composes the digital image 24 to in a sense adapt the effective background to a homogeneous color. In this case, the image processor 12 determines which color is least present in the effective background and composes the digital image 24 with colors that saturate the effective background toward that color. The image processor 12 may for instance distinguish between the background of the image 24 (e.g., the general surface against which information is displayed) and the foreground of the image 24 (e.g., the information itself), and then compose the background of the image 24 with the color least present in the effective background. The image processor 12 may also compose the foreground of the image 24 with a color that has high contrast to this background color.
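A sketch of this approach follows, assuming the "least present" color is chosen from a small candidate palette by nearest-color assignment, and that a simple RGB complement serves as the high-contrast foreground color; both choices are illustrative, not mandated by the text:

```python
def least_present_color(bg_pixels, candidates):
    """Among candidate colors, pick the one least present in the
    effective background, judged by assigning each background pixel
    to its nearest candidate and counting."""
    counts = {c: 0 for c in candidates}
    for px in bg_pixels:
        nearest = min(candidates,
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(c, px)))
        counts[nearest] += 1
    return min(candidates, key=lambda c: counts[c])

def complement(color):
    """One possible high-contrast foreground color: the RGB complement
    of the chosen background color."""
    return tuple(255 - v for v in color)
```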
Thus, those skilled in the art will again appreciate that the above descriptions merely illustrate non-limiting examples that have been used primarily for explanatory purposes. The transparent screen 22, for instance, has been explained for convenience as being rectangular, but in fact the screen 22 may be of any shape without departing from the scope of the present invention. The screen 22 may also be split into two sections, one perhaps dedicated to the left eye and the other to the right eye. In this case, the two sections may be treated independently as separate screens in certain aspects, e.g., with a dedicated evaluation of the effective background of each, but treated collectively as the surface onto which the composed digital image 24 is displayed.
Moreover, depending on the particular arrangement of the front camera 16A and the rear camera 16B in those embodiments utilizing both, the image processor 12 may implement still further calibration processing to compensate for any other differences in their arrangement not explicitly discussed above.
Of course, the detector 16 for acquiring information about the environmental background (as opposed to the user's viewing angle) need not be a rear camera at all. In other embodiments, for example, this detector 16 is a chromometer (i.e., a colorimeter) or spectrometer that provides the image processor 12 with a histogram of information about the environmental background. In still other embodiments, the detector 16 is an orientation and position detector that provides the image processor 12 with information about the geographic position and directional orientation of the detector 16. This information may indirectly provide the processor 12 with information about the environmental background. Indeed, in such embodiments, the image processor 12 may be configured to determine or derive image(s) of the environmental background from image(s) previously captured at or near the geographic position indicated.
Those skilled in the art will further appreciate that the various “circuits” described may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware (e.g., stored in memory) that, when executed by the one or more processors, perform as described above. One or more of these processors, as well as the other digital hardware, may be included in a single application-specific integrated circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).
For example, in some embodiments, the image processor 12 retrieves digital image data 13 from the memory 14, which includes executable instructions for generating one or more logical objects of the digital image 24. The instructions may describe a hierarchy of logical objects in terms of vector graphics (i.e., geometrical primitives) or raster graphics (i.e., pixel values). In either case, though, the instructions in at least one embodiment describe only one way to generate logical objects of the image 24; that is, the instructions in a sense define a nominal, or default, spatial arrangement and/or coloration of the logical objects that is not based on evaluation of the effective background of the screen 22. Thus, in these embodiments, the image processor 12 is configured to selectively deviate from, or even modify, the retrieved instructions in order to generate the logical objects with a spatial arrangement and/or coloration that is indeed based on such evaluation, as described above. The particular manner in which the image processor 12 deviates from, or modifies, the instructions may be specified beforehand in pre-determined rules or dynamically on an image-by-image basis. Having deviated from and/or modified those instructions to generate the logical objects, the image processor 12 may then flatten the logical objects to form the digital image 24.
In other embodiments, though, the instructions describe several possible ways to generate logical objects of the image 24, e.g., without substantially affecting the meaning conveyed by the image 24. The instructions may, for example, describe that a button may be placed in either the lower-left corner of the image 24, or the lower-right corner of the image 24, and may be either red, green, or blue. In such embodiments, the image processor 12 is configured to assess the perceptibility of a logical object for each possible way to generate that logical object, based on evaluation of the effective background of the screen 22. The image processor 12 may then select between those possibilities in order to meet some criteria with regard to the image's perceptibility (e.g., maximum perceptibility) and generate the logical object with the selected possibility. Having generated all logical objects of the image 24 in this way, the image processor 12 may again flatten the logical objects to form the digital image 24.
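The selection among the described possibilities can be sketched generically. Here `score` stands in for whatever perceptibility measure is used (e.g., a joint contrast/variance metric evaluated on the background region behind a candidate placement); the names and the maximum-perceptibility criterion shown are illustrative:

```python
def choose_rendering(candidates, score):
    """Given the possible ways the instructions allow a logical object
    to be generated (e.g., placement/color combinations), score each
    against the effective background and return the best.

    candidates -- iterable of (placement, color) options
    score      -- perceptibility function of (placement, color)
    """
    return max(candidates, key=lambda pc: score(*pc))
```

For instance, given a button that may sit in either lower corner, the option whose background region yields the higher perceptibility score is generated.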
Furthermore, the various embodiments presented herein have been generally described as providing for the perceptibility of a digital image 24 as viewed against the effective background. One should note, though, that the perceptibility provided for is not necessarily tailored to any particular user's perception of color. Rather, the perceptibility provided for is some pre-determined, objective perceptibility provided according to pre-determined thresholds of perceptibility and color relationships.
Those skilled in the art will also appreciate that the device 10 described herein may be any device that includes an image processor 12 configured to prepare a digital image for display on a transparent screen (whether the screen is integrated with or external to the device). Thus, the device 10 may be a mobile communication device, such as a cellular telephone, personal data assistant (PDA), or the like. In any event, the device may be configured in some embodiments to prepare a digital image for display on a substantially transparent screen integrated with the device itself, or on an external transparent screen communicatively coupled to the device (e.g., a heads-up display). A heads-up display as used herein includes any transparent display that presents data without requiring the user to look away from his or her usual viewpoint. This includes both head- and helmet-mounted displays that move with the orientation of the user's head, as well as fixed displays that are attached to some frame (e.g., the frame of a vehicle or aircraft) that does not necessarily move with the orientation of the user's head.
With the above variations and/or modifications in mind, those skilled in the art will appreciate that the image processor 12 described above generally performs the method shown in
Nonetheless, those skilled in the art will recognize that the present invention may be carried out in other ways than those specifically set forth herein without departing from essential characteristics of the invention. The present embodiments are thus to be considered in all respects as illustrative and not restrictive, and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.