Existing systems composite multiple layers of raster or vector image content to produce a final image. Typical scenarios include a user zooming or panning an image of a map, where each frame requires the composition of multiple image layers (e.g., twenty or more in some examples) having differing resolutions.
Some other systems improve the frame rate by relying on fast hardware (e.g., dedicated graphics processors and memory) to perform the rendering. However, only computing devices having the specific hardware needed by such systems benefit from these implementations. Additionally, because the rendering cost grows linearly with each additional layer, the frame rate declines as additional layers are processed, even with hardware-accelerated rendering.
Embodiments of the invention enable smooth, continuous image rendering using multiple image layers per frame. A plurality of image layers having different resolutions is received for display. The image layers are arranged in order of increasing resolution. Starting with the image layer having the lowest resolution, each image layer is upsampled to the resolution of the next image layer having a higher resolution, and the upsampled image layer is blended with that next image layer. The upsampling and blending continue for each of the image layers to produce a blended image. The blended image is provided for display on a computing device.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Corresponding reference characters indicate corresponding parts throughout the drawings.
In FIG. 1, an existing system blends each of a plurality of image layers directly into the final image at the full resolution of the final image. As a result, the rendering cost of such a system grows linearly with the quantity of image layers, as reflected in Equation (2) below.
In contrast, referring to the figures, embodiments of the disclosure provide image 212 rendering using intermediate resolutions from multiple image layers 208. In some embodiments, the image layers 208 include vector as well as raster content, in any form of storage (e.g., compressed, video stream, etc.). Each image such as image 212 for a frame of video includes a plurality of the image layers 208 blended together. The plurality of image layers 208 is arranged in order of increasing resolution. Each of the image layers 208 is upsampled and blended with the image layer 208 having the next higher resolution. However, in contrast to the existing system illustrated in FIG. 1, each of the image layers 208 is upsampled only to the resolution of the image layer 208 having the next higher resolution, rather than to the full resolution of the final image.
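The following is a minimal sketch of this composition, assuming each image layer 208 is held as an 8-bit NumPy array ordered by increasing resolution and has an associated opacity factor 210 in [0, 1]. The function name and the choice of bilinear resampling are illustrative, not taken from the disclosure.

```python
import numpy as np
from PIL import Image  # used here only for resampling

def compose_layers(layers, opacities):
    """Blend image layers at intermediate resolutions, lowest first."""
    blended = layers[0].astype(np.float32)
    for layer, opacity in zip(layers[1:], opacities[1:]):
        target_h, target_w = layer.shape[:2]
        # Upsample the running composite only to the resolution of the
        # next layer, not to the final display resolution.
        im = Image.fromarray(blended.astype(np.uint8))
        im = im.resize((target_w, target_h), Image.BILINEAR)
        upsampled = np.asarray(im, dtype=np.float32)
        # Blend the next layer over the upsampled composite using its
        # opacity factor.
        blended = (opacity * layer.astype(np.float32)
                   + (1.0 - opacity) * upsampled)
    return blended.astype(np.uint8)
```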
Aspects of the disclosure provide, at least, high-resolution image rendering without sacrificing the rendering frame rate. A crisp, high-resolution composite image having the proper image data for a given zoom or pan state is provided for each frame of animated navigation across the zoom or pan states. The animated navigation promotes an improved and interactive user experience, in part due to a dramatic reduction in the amount of time spent upsampling. The blending described herein provides a continuous transition between the image layers 208 (e.g., spatial blending) and as the image layers 208 become available (e.g., temporal blending of newly obtained image data). Further, the image may be a still image from a motion video, and may include one or more of the following: textual data, nontextual data, vector data, and/or nonvector data.
In an example, map imagery may include twenty image layers 208 from the high-level view down to a street view. Data from the twenty image layers 208 may at some point be blended together in the composition. While an upsampled image layer 208 may become too blurry to convey useful information when scaled up too far, including such image layers 208 in the scene aids in the continuity of the user experience.
While embodiments of the disclosure are described herein with reference to continuous zooming, panning, rotating, transforming, navigation, or other animation of an image while changing any view or perspective of the image, embodiments of the disclosure are applicable to any system for multi-layer image processing and/or display. Aspects of the disclosure are not limited to zooming or panning. Further, the functionality within the scope of the disclosure may be embodied in software, hardware, or a combination of both.
Referring next to FIG. 2, an exemplary block diagram illustrates the computing device 202 and the memory area 206 storing the image layers 208.
The image layers 208 stored in the memory area 206 are accessible by the computing device 202. In the example of FIG. 2, the memory area 206 is associated with the computing device 202.
In some embodiments, the image layers 208 are actively obtained or retrieved from a web service 222 or other image source by the computing device 202 via a network 224. The web service 222 includes, for example, a computer providing images of maps. In such an example, a user 203 of the computing device 202 may initiate a zoom or pan request on a displayed image such as image 212. The image 212, stored in the memory area 206, represents a composition of one or more of the image layers 208. In some embodiments, the image 212 represents a still image, and may correspond to a single frame of video.
Responsive to the zoom or pan request, the computing device 202 obtains or retrieves the image layers 208 corresponding to the request. In other embodiments, the image layers 208 are pre-loaded into the memory area 206, passively received by the computing device 202 (e.g., during off-peak periods for network traffic, or during periods of low-cost data traffic), or otherwise stored in the memory area 206 without being directly responsive to a request from the user 203.
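As a hedged illustration of obtaining the image layers 208 responsive to such a request, the sketch below fetches one encoded image per zoom level from a web service. The URL pattern and query parameter are hypothetical; the disclosure does not specify the interface of the web service 222.

```python
import urllib.request

def fetch_layers(base_url, zoom_levels):
    """Retrieve one encoded image layer per zoom level, coarse to fine."""
    layers = []
    for level in sorted(zoom_levels):
        url = f"{base_url}/tiles?zoom={level}"  # hypothetical endpoint
        with urllib.request.urlopen(url) as response:
            layers.append(response.read())  # compressed image bytes
    return layers
```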
The network 224 may be any network including, for example, the Internet or a wireless network such as a mobile data network, Wi-Fi network, or BLUETOOTH network. In embodiments in which the network 224 is a mobile network, the computing device 202 may be a mobile computing device such as a mobile telephone.
The computing device 202 has a processor 204 associated therewith. The processor 204 is programmed to execute computer-executable instructions for implementing aspects of the disclosure. In some embodiments, the processor 204 is programmed to execute instructions such as those illustrated in the figures (e.g., the flow charts of FIG. 4 and FIG. 6).
The memory area 206 or other computer-readable media further stores computer-executable components including an interface component 214, a sort component 216, a composition component 218, and a buffer component 220. These components are described in further detail below.
Referring next to FIG. 3, an exemplary block diagram illustrates a plurality of the image layers 208 arranged in order of increasing resolution. In the example of FIG. 3, each of the image layers 208 has an associated opacity factor 210 for use during blending.
The resolutions of the ordered image layers 208, such as those shown in FIG. 3, increase with the index $I$, where $I$ runs from $1$ to $N$ and $N$ is the quantity of image layers 208 for the frame. Let $L_I$ denote the image layer at position $I$ in the ordering, $O_I$ the opacity factor 210 associated with $L_I$, $A_I$ an intermediate surface at the resolution of $L_I$, and $A$ the final surface. For each $I$, the following operations are performed:
1. If $I > 1$, render $A_{I-1}$ to $A_I$.
2. Render $L_I$ into $A_I$ with blend factor $O_I$.
Once $A_N$ is produced, it is rendered into the final surface $A$. The cost of this approach is shown in Equation (1) below.
$$\sum_{I}\left(C(A_{I-1},\,1,\,A_I) + C(L_I,\,O_I,\,A_I)\right) \qquad (1)$$
If $C(L, O, A)$ is the cost of blending layer $L$ onto surface $A$ with blend factor $O$, the cost of the prior art approach in FIG. 1 is shown in Equation (2) below.
$$\sum_{I} C(L_I,\,O_I,\,A) \qquad (2)$$
Accordingly, when the $L_I$ are represented by images whose linear dimensions double with each $I$, the cost from Equation (1) is approximately $c \cdot A$, where $A$ here denotes the quantity of pixels at the maximum resolution and $c$ depends on the quotient of the geometric series of layer sizes (e.g., $c = 4/3 \approx 1.333$ when the quotient is two, since pixel counts then differ by a factor of four between consecutive layers), whereas the cost from Equation (2) is approximately $N \cdot A$ for $N$ image layers. As such, embodiments of the disclosure enable image processing at a cost that is bounded by a constant factor times the quantity of pixels at the maximum resolution of the image layers 208 for a frame, rather than a factor that grows with the quantity of image layers 208. Accordingly, the amount of time spent upsampling is reduced dramatically over previous solutions, thereby improving the user experience.
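A short worked comparison of the two costs follows, under the stated assumption that pixel counts differ by a factor of four between consecutive layers. The frame size and layer count are illustrative choices, not taken from the disclosure.

```python
N = 20           # quantity of image layers (the map example above)
A = 1920 * 1080  # quantity of pixels at the maximum resolution

# Equation (2): every layer is blended directly at full resolution.
cost_direct = N * A

# Equation (1): the intermediate surface at level k holds A / 4**k
# pixels, so the blending work is a geometric series summing to c * A.
c = sum((1.0 / 4.0) ** k for k in range(N))  # ~1.333 for quotient two
cost_intermediate = c * A  # the upsampling term adds a similar bound

print(cost_direct / cost_intermediate)  # ~15x fewer pixel operations
```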
Referring next to FIG. 4, an exemplary flow chart illustrates the upsampling and blending of a plurality of the image layers 208 to produce a blended image for display.
The image layers 208 are ordered according to increasing resolution or size. The ordering may occur at the computing device 202, or the image layers 208 may be ordered prior to receipt by the computing device 202. For each of the image layers 208 at 404 starting with a first one of the image layers 208 having a lowest resolution, the image layer 208 is upsampled at 406 to the resolution associated with a next one of the ordered image layers 208 having the next higher resolution. The upsampled image layer 208 is blended with that next image layer 208 at 408. For example, the blending at 408 occurs based on the opacity factor 210 associated with that next image layer 208.
Processing continues at 410 at the next image layer 208 using the blended image layer 208. When the image layers 208 for a particular frame or image have been processed, the resulting blended image is provided for display on the computing device 202 at 412. The resulting blended image may be stored in a graphics buffer for access by a graphics card, provided as a video frame, transmitted to a display device for display, displayed to the user 203, or otherwise conveyed to the user 203.
In general, with reference to the components illustrated in FIG. 2, the interface component 214 receives the plurality of image layers 208 for display, the sort component 216 arranges the received image layers 208 in order of increasing resolution, the composition component 218 iteratively upsamples and blends the ordered image layers 208 at the intermediate resolutions to produce the blended image, and the buffer component 220 provides the blended image produced by the composition component 218 for display on the computing device 202.
In some embodiments, the composition component 218 is implemented as software for execution by one of the main processors in the computing device 202. In other embodiments, the composition component 218 is implemented as logic for execution by a dedicated graphics processor or co-processor.
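A minimal sketch of how these four components might divide the work is shown below, reusing the compose_layers() function from the sketch above. The class structure and method names are assumptions; the disclosure defines the components only by their functions.

```python
class InterfaceComponent:
    def receive(self, image_layers):
        """Receives the plurality of image layers for a frame."""
        return list(image_layers)

class SortComponent:
    def order(self, image_layers):
        """Arranges the image layers in order of increasing resolution."""
        return sorted(image_layers,
                      key=lambda layer: layer.shape[0] * layer.shape[1])

class CompositionComponent:
    def compose(self, ordered_layers, opacities):
        """Iteratively upsamples and blends the ordered layers."""
        return compose_layers(ordered_layers, opacities)

class BufferComponent:
    def provide(self, blended_image, graphics_buffer):
        """Places the blended image where the display pipeline reads it."""
        graphics_buffer.write(blended_image)
```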
Referring next to FIG. 5, an exemplary illustration shows a blended image produced from a plurality of the image layers 208. In the example of FIG. 5, the image layers 208 have been upsampled and blended at incrementally higher resolutions, as described herein, to produce the blended image.
Referring next to FIG. 6, an exemplary flow chart illustrates the rendering of successive frames during an animation of the image 212, such as a zoom or pan operation.
The image layers 208 associated with each of the frames are arranged in order of increasing resolution. Each of the image layers 208 is upsampled and blended at incrementally higher resolutions at 608 to produce a blended image, such as illustrated in FIG. 5.
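The temporal blending described above can be sketched as an opacity ramp: as a newly obtained image layer 208 becomes available mid-animation, its opacity factor 210 is raised over successive frames so the layer fades in rather than appearing abruptly. The ramp length below is an illustrative choice, not taken from the disclosure.

```python
def ramp_opacity(frames_since_arrival, ramp_frames=10):
    """Opacity factor for a newly arrived layer, fading in over frames."""
    return min(1.0, frames_since_arrival / float(ramp_frames))
```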
Exemplary Operating Environment
While aspects of the invention are described with reference to the computing device 202, embodiments of the invention are operable with any computing device. For example, aspects of the invention are operable with devices such as laptop computers, gaming consoles (including handheld gaming consoles), hand-held or vehicle-mounted navigation devices, portable music players, personal digital assistants, information appliances, personal communicators, handheld televisions, or any other type of electronic device.
By way of example and not limitation, computer readable media comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
Although described in connection with an exemplary computing system environment, embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with aspects of the invention include, but are not limited to, mobile computing devices, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, gaming consoles, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments of the invention may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. The computer-executable instructions may be organized into one or more computer-executable components or modules. Generally, program modules include, but are not limited to, routines, programs, objects, components, and data structures that perform particular tasks or implement particular abstract data types. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
Aspects of the invention transform a general-purpose computer into a special-purpose computing device when configured to execute the instructions described herein.
The embodiments illustrated and described herein as well as embodiments not specifically described herein but within the scope of aspects of the invention constitute exemplary means for producing a composite image by iteratively upsampling and blending each of the plurality of image layers 208 at the incremental resolutions of each of the image layers 208, and exemplary means for optimizing a quantity of pixels adjusted during said upsampling and said blending by upsampling each of the image layers 208 only to the resolution of the one of the image layers 208 having the next higher resolution.
The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention.
When introducing elements of aspects of the invention or the embodiments thereof, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
Having described aspects of the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the invention as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
Number | Name | Date | Kind
---|---|---|---
6239807 | Bossut | May 2001 | B1
6747649 | Sanz-Pastor et al. | Jun 2004 | B1
6774898 | Katayama et al. | Aug 2004 | B1
7002602 | MacInnis et al. | Feb 2006 | B2
7075535 | Aguera y Arcas | Jul 2006 | B2
7102652 | O'Donnell et al. | Sep 2006 | B2
7110137 | Burgess et al. | Sep 2006 | B2
20020021758 | Chui | Feb 2002 | A1
20060038823 | Arcas | Feb 2006 | A1
20060250415 | Stevenson | Nov 2006 | A1
20070002071 | Hoppe et al. | Jan 2007 | A1
20070252834 | Fay | Nov 2007 | A1
20080024390 | Baker et al. | Jan 2008 | A1
20080028335 | Rohrabaugh et al. | Jan 2008 | A1
“NASA Great Zooms: A Case Study”, retrieved at <<http://svs.gsfc.nasa.gov/stories/zooms/zoompg2.html>>, Oct. 20, 2008, pp. 3.
El Santo, “Trilinear Mipmap Interpolation”, retrieved at <<http://everything2.com/title/Trilinear+mipmap+interpolation>>, retrieved on Jan. 31, 2012, pp. 1.
Shirah, et al., “NASA Great Zooms: A Case Study”, in IEEE Visualization, Oct. 27, 2002, pp. 4.
Number | Date | Country
---|---|---
20100171759 A1 | Jul 2010 | US