Users sometimes employ computers to generate, or “render,” computer graphics (“images”) that range from photorealistic to non-photorealistic. Photorealistic images provide accurate visual depictions of objects, whether real or not, whereas non-photorealistic images appear to be hand drawn or are otherwise fanciful or artistic. Most maps are images: generally two-dimensional, geometrically accurate representations of a three-dimensional space. An aerial image could be construed as a kind of map, and a graphics system that generates imagery resembling aerial images would be considered a photorealistic aerial image synthesizer. Most maps, however, are geometrically accurate yet visually simplified: they provide visual representations of information assembled by cartographers to depict the three-dimensional space meaningfully and accurately in two dimensions. Maps may depict various features of that space, such as roads, water bodies, and buildings. Finally, non-photorealistic maps can be more stylized and may use non-literal symbolism. As an example, tour operators sometimes provide tourists with non-photorealistic maps containing whimsical or artistic renderings of map features; such maps may not be to scale and may depict features artistically. Maps, like other images, can thus span the range from photorealistic to non-photorealistic.
Various techniques exist for creating non-photorealistic images using computers. These techniques generally render objects based on the geometric parameters that define them. As an example, a technique may determine that a line appearing between a large water body and a landmass defines a coastline. When a rendering algorithm encounters such a coastline, it may render the coastline in a darker shade than other lines appearing in the map. However, when a line defining a mountain range appears between a water body and a landmass, such a rendering algorithm may not correctly depict the mountain range and may instead incorrectly depict the line as a coastline. Furthermore, such a rendering algorithm may have no suitable way to render a feature artistically based on input other than an object's geometric parameters.
A facility is described for synthesizing images in various ways during non-photorealistic rendering of vector representations of image features, such that the features in the image are drawn differently based on semantic labels attached to the data that defines them. In various embodiments, the facility utilizes transformation and rendering algorithms to generate non-photorealistic images in which features are transformed or rendered based on associated “labels” indicated in inputs corresponding to the features, such as inputs in a data file. A label indicates the type of an object, such as a map's feature. As an example, whereas a line or a spline may provide the geometric characteristics of a street, river, or other linear feature of a map, an associated label can indicate that the feature is in fact a street or a river. When the feature is so labeled, the facility utilizes a transformation or rendering algorithm appropriate for the label. Thus, the facility is able to generate semantics-guided non-photorealistic images, such as maps containing artistic effects, by considering the labels associated with objects.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
A facility is described for synthesizing semantics-guided, non-photorealistic images from vector representations of image features. In various embodiments, the facility utilizes transformation and rendering algorithms to generate non-photorealistic images in which features are transformed or rendered based on associated “labels” indicated in inputs corresponding to the features. A label indicates the type of an object, such as a map's feature. As an example, whereas a line or a spline may provide the geometric characteristics of a street, river, or other linear feature of a map, an associated label can indicate that the feature is a street or a river. When the feature is so labeled, the facility utilizes a transformation or rendering algorithm appropriate for the label. As an example, when a label of a data file identifies a line as a street, the facility may use a transformation algorithm applicable to streets. In contrast, when the label identifies the line as a river, the facility may use a transformation algorithm applicable to rivers. This is known as “semantics-guided transformation.” These transformation algorithms may also render the features by simultaneously applying an artistic effect. As an example, the transformation algorithms may render objects in a woodcut-like manner. Thus, the facility is able to generate semantics-guided non-photorealistic images, such as maps containing artistic effects, by considering the labels associated with objects.
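As a minimal sketch of this dispatch, the following hypothetical Python fragment selects a transformation function by semantic label rather than by geometry alone; all names here are illustrative assumptions, not part of any actual implementation:

    def transform_street(points):
        # Illustrative only: a street might be rendered as a wide gray stroke.
        return {"stroke": points, "width": 4.0, "color": "gray"}

    def transform_river(points):
        # Illustrative only: a river might be rendered as a thinner blue stroke.
        return {"stroke": points, "width": 2.0, "color": "blue"}

    # The same geometry is handled differently depending on its label.
    TRANSFORMS = {"street": transform_street, "river": transform_river}

    def transform(label, points):
        # Semantics-guided transformation: dispatch on the label.
        return TRANSFORMS[label](points)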
In various embodiments, the facility may receive indications of various options, such as a style for transformations. Examples of styles include, but are not limited to, woodcuts, animations, town plans, etc. The facility renders transformed images according to these options and then combines the rendered features into an image. As an example, the facility may use matting or overlaying techniques to combine the rendered features. In various embodiments, the facility uses procedural techniques with stochastic elements to render features. In some embodiments, such procedural techniques can specify a feature algorithmically, e.g., instead of providing a bitmap. In various embodiments, the facility may also use bitmaps or other graphics techniques. The facility can use the stochastic elements to introduce a randomness factor when rendering an image.
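As a minimal sketch, assuming a simplified layer model and hypothetical function names, stochastic perturbation and overlay-based combination might look as follows:

    import random

    def jitter(points, amount=1.5, seed=None):
        # Stochastic element: perturb control points slightly so repeated
        # renderings look hand drawn rather than mechanically identical.
        rng = random.Random(seed)
        return [(x + rng.uniform(-amount, amount),
                 y + rng.uniform(-amount, amount)) for x, y in points]

    def composite(layers):
        # Overlay technique: later layers are drawn over earlier ones.
        image = []
        for layer in layers:
            image.extend(layer)
        return image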
In various embodiments, the facility receives as input a vector representation of an image, such as a map. The vector representation indicates geometric objects with corresponding labels. Each geometric object defines a feature, such as a tree, house, street, river, mountain range, lake, etc. The features can be defined by geometric shapes such as points, lines, splines, polygons, areas, volumes, etc. The facility processes this input to create an image.
The facility thus enables images to be rendered with artistic or other useful effects, such as by employing vector features that are labeled with semantics-related information.
As used herein, “transformation” means converting a set of inputs, such as a definition of objects in a data file, into a representation that can be rendered on a screen. Transformation further includes geometrically or otherwise manipulating the representation, such as to add an artistic effect.
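Both halves of this definition can be sketched in a few lines of hypothetical Python, where the first function performs the conversion and the second performs an artistic manipulation (function names and constants are illustrative):

    import math

    def to_screen(points, scale=2.0, offset=(10.0, 10.0)):
        # Conversion: map model coordinates to a screen-space representation.
        ox, oy = offset
        return [(x * scale + ox, y * scale + oy) for x, y in points]

    def wobble(points, amplitude=2.0):
        # Manipulation: add a gentle hand-drawn wobble as an artistic effect.
        return [(x + amplitude * math.sin(0.3 * y), y) for x, y in points]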
Turning now to the figures,
The facility is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the facility include, but are not limited to, personal computers, server computers, handheld or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The facility may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The facility may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to
The computer 111 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 111 and include both volatile and nonvolatile media and removable and nonremovable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communications media. Computer storage media include volatile and nonvolatile and removable and nonremovable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 111. Communications media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communications media include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system (BIOS) 133, containing the basic routines that help to transfer information between elements within the computer 111, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by the processing unit 120. By way of example, and not limitation,
The computer 111 may also include other removable/nonremovable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media, discussed above and illustrated in
The computer 111 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 111, although only a memory storage device 181 has been illustrated in
When used in a LAN networking environment, the computer 111 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 111 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160 or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 111, or portions thereof, may be stored in the remote memory storage device 181. By way of example, and not limitation,
While various functionalities and data are shown in
The techniques may be described in the general context of computer-executable instructions, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
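By way of illustration, a labeled feature might be supplied in a data file whose content, expressed here as a Python literal purely for readability (the actual file format is not prescribed by the facility), takes the following hypothetical form:

    # Hypothetical input fragment; the label ties semantics to the geometry.
    objects = [
        {
            "label": "river",
            "points": [(120, 30), (25, 150), (290, 150)],
        },
    ]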
This example indicates that a river is defined using the X,Y coordinates of (120,30), (25,150), and (290,150). A line segment from each of these coordinates to the next coordinate defines the river.
In various embodiments, the facility may receive the label information from a source outside the data file. As an example, a user may manually indicate feature types. Alternatively, a second data file may provide a correspondence between objects and feature types. The data files can provide objects in various ways, including as vector representations.
A rendering application 204 receives and processes input, such as from a data file, to output a set of one or more graphics layers 206. The rendering application has a rendering object 208. The rendering object processes objects defined by the data file. This rendering object invokes draw_objects and render_object routines to transform and render each object. These routines are described in further detail below in relation to
The rendering application may load various layer generator objects, such as from a dynamic link library (“DLL”) corresponding to the style in which the object is to be rendered. The illustrated embodiment functions with maps. Accordingly, the rendering application is shown as having loaded generators for map features, including a land-layer generator 210, an ocean-layer generator 212, and a river-layer generator 214. For each object in the data file, the rendering object determines which of the generator objects is to render the object. In various embodiments, each of the generator objects may provide one or more transformation functions corresponding to a particular feature. As an example, the land-layer generator object may provide one transformation function for mountain ranges and another for coastlines.
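A minimal sketch of how such generator objects and their transformation functions might be organized, using hypothetical Python classes in place of style-specific DLLs:

    class LandLayerGenerator:
        # Hypothetical generator providing land-related transformations.
        def transform_mountain_range(self, points):
            return {"layer": "land", "feature": "mountains", "points": points}

        def transform_coastline(self, points):
            return {"layer": "land", "feature": "coastline", "points": points}

    class RiverLayerGenerator:
        def transform_river(self, points):
            return {"layer": "river", "feature": "river", "points": points}

    # One set of generators per style, mirroring the per-style DLLs above.
    _land, _river = LandLayerGenerator(), RiverLayerGenerator()
    WOODCUT_GENERATORS = {
        "mountains": _land.transform_mountain_range,
        "coastline": _land.transform_coastline,
        "river": _river.transform_river,
    }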
The rendering application may load multiple sets of generator objects. As an example, the facility may have sets of generator objects that each provide a different style, such as woodcut, animation, and so forth.
The transformation functions each add one or more layers to the graphics layers 206 when they transform and render an object. These graphics layers combine to produce an image representing the objects. The transformation functions may be associated with various types of features: point features, such as trees and houses; linear features, such as streets and rivers; area features, such as lakes and land masses; and volumetric features, such as buildings and volcanoes.
In various embodiments, the transformation functions are “procedural,” in that an algorithm is used to render an object instead of transforming an existing bitmap. In other embodiments, the transformation functions may transform images or bitmaps to render objects. Transformation functions transform vector representations into graphical form, and as such may involve parameters that adjust color, geometry, drawing style, degree of blur, and other visual components of the rendered image. In yet other embodiments, the transformation functions may use hybrid approaches.
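As a sketch of a procedural transformation function, the following hypothetical fragment specifies a woodcut-like stroke algorithmically and exposes its visual components as adjustable parameters (all names and defaults are illustrative assumptions):

    def woodcut_stroke(points, color="black", width=3.0, blur=0.0):
        # Procedural rendering: the stroke is specified algorithmically
        # rather than derived from an existing bitmap; color, width, and
        # blur are the kinds of adjustable visual components noted above.
        n = max(len(points) - 1, 1)
        return [{"from": p0, "to": p1, "color": color, "blur": blur,
                 # Taper the width along the stroke for a carved look.
                 "width": width * (1.0 - 0.5 * i / n)}
                for i, (p0, p1) in enumerate(zip(points, points[1:]))]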
In various embodiments, the facility may use various additional properties to further manipulate rendered images. As an example, the facility may receive an indication of a time of day or a day of the year from a user and render scenes appropriately; shadows may then appear on the appropriate side and backgrounds may be colored appropriately.
Between blocks 304 and 314, the routine processes each object in the set of objects. At block 304, the routine selects an object from the received set of objects.
At block 306, the routine determines whether the selected object has a label. A label indicates the type of an object, such as a map's feature. When the object has a label, the facility is able to invoke a routine that renders the indicated type of object, such as a routine that performs a transformation based on the object's type; in that case, the routine continues at block 310. Otherwise, the routine continues at block 312 to render the object without any transformation that is specific to the type of object.
At block 310, the routine invokes a render_object subroutine to render the selected object. The render_object subroutine is described in further detail below in relation to
At block 312, the routine renders the selected object. When the routine renders the selected object, the routine may add a bitmapped image to the graphics layers 206 corresponding to the selected object. As an example, the routine may draw a tree or other shape for a point feature that is labeled as “tree”.
At block 314, the routine selects another object that has not yet been processed. When all objects have been processed, the routine continues at block 316, where it returns. Otherwise, the routine continues at block 306.
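The flow of blocks 304 through 316 might be sketched as follows, with render_object passed in as a parameter (it is sketched after the discussion of blocks 404 through 408 below) and plain_render standing in for the untransformed rendering of block 312; all names are hypothetical:

    def plain_render(obj):
        # Block 312: draw the object without any type-specific
        # transformation, e.g., as a stock shape or bitmap.
        return {"feature": "generic", "points": obj["points"]}

    def draw_objects(objects, rendering_info, layers, render_object):
        for obj in objects:                                 # blocks 304 and 314
            if obj.get("label") is not None:                # block 306
                render_object(obj, rendering_info, layers)  # block 310
            else:
                layers.append(plain_render(obj))            # block 312
        return layers                                       # block 316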
In various embodiments, the facility performs further geometric transformations to the rendered objects, such as to add perspective effects. In various embodiments, this geometric transformation is performed by the transformation functions.
At block 404, the routine selects a transformation function based on the object's label and the rendering information. As an example, if the label indicates that the object is a river, the routine selects a river transformation function provided by the river-layer generator. If the label indicates that the object is a mountain range, the routine selects a mountain range transformation function provided by the land-layer generator. Generator objects can provide multiple transformation functions. As an example, the land-layer generator object may provide transformation functions for coastlines, mountain ranges, and other land-related rendering transformations. The routine selects a set of generator objects based on the received rendering information. As an example, the routine selects a set of generator objects that provide the woodcut style when the rendering information indicates that style. The generator objects may have additional parameters that may be adjusted automatically, or manually by the user, to account for other effects, e.g., perspective or color effects.
At block 406, the routine invokes the selected transformation function. As an example, the routine may invoke the mountain range transformation function of the land-layer generator that provides the woodcut style. The routine provides an indication of the object to the transformation function. As an example, the routine may provide the control points and other information associated with the object that is to be rendered. The transformation function renders the object and adds the rendered object to the graphics layers.
At block 408, the routine returns.
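A compact sketch of this subroutine, consistent with the draw_objects sketch above (names are hypothetical, and rendering_info is assumed to carry both the selected style and the per-style generator sets):

    def render_object(obj, rendering_info, layers):
        # Block 404: select the generator set for the requested style,
        # then the transformation function registered for the label.
        generator_sets = rendering_info["generators"]
        transform = generator_sets[rendering_info["style"]][obj["label"]]
        # Block 406: the transformation function renders the object and
        # adds its output to the graphics layers.
        layers.append(transform(obj["points"]))
        # Block 408: return to the caller.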
In various embodiments, the facility may perform various artistic projections, such as shifting a tree that occludes a more important feature (e.g., a house).
In various embodiments, the facility enables a user to select styles for various objects manually. As an example, when drawing a city map using an artistic rendering style, the user may specify that a landmark building is to be rendered in a historic style whereas a newer building is to be rendered in a more modern style. The facility could then use the appropriate transformation functions.
In various embodiments, a user can select a color choice, add features that do not appear in the data file, zoom to various levels, and so forth. As an example, a user may be able to identify and add a particular location on a map, such as the user's house or office. The facility could then additionally render the user's input using the same or different style as the style used for the image or map.
In various embodiments, a user can prioritize features to illustrate, such as when multiple features occupy the same or adjacent spaces. In a further refinement, the user may be able to indicate that only streets between two locations are to be displayed, such as from the nearest freeway to the user's house.
In various embodiments, a user can specify a property relating to detail. As an example, the facility may render a small number of trees to represent a forest or may render a small number of buildings to represent a settlement.
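A detail property of this kind might be sketched as a simple thinning step; the function name and the fraction retained are illustrative assumptions:

    import random

    def thin_features(features, detail=0.1, seed=0):
        # Keep only a fraction of repeated features, e.g., a few trees
        # standing in for a forest or a few buildings for a settlement.
        rng = random.Random(seed)
        return [f for f in features if rng.random() < detail]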
In various embodiments, the facility can output images in various known formats, such as JPEG, vector images, or any electronic graphics representation.
Those skilled in the art will appreciate that the steps shown in FIGS. 3 and 4 and discussed above may be altered in various ways. For example, the order of the steps may be rearranged, substeps may be performed in parallel, shown steps may be omitted, other steps may be included, etc.
It will be appreciated by those skilled in the art that the above-described facility may be straightforwardly adapted or extended in various ways. As an example, the facility may iteratively employ multiple transformation functions to provide various results. While the foregoing description makes reference to particular embodiments, the scope of the invention is defined solely by the claims that follow and the elements recited therein.