The present disclosure relates to electronic map systems, and more specifically to a mapping system that adjusts map features based on time.
A digital map is generally stored in a map database as a set of raw data corresponding to millions of streets and intersections and other map features to be displayed as part of a map. Map features may include, for example, individual roads, text labels (e.g., map or street labels), areas, text boxes, buildings, points of interest markers, terrain features, bike paths, etc. A digitally rendered map image (referred to as a digital map image) may be rendered in a number of ways. Two such methods of rendering digital map images include a raster image process and a vector image process. In a raster image process, a map image is stored within the map database as sets of rasterized or pixelated images made up of numerous pixel data points, with each pixel data point including properties defining how a particular pixel in an image is to be displayed on an electronic display device.
The vector image process uses what is traditionally called vector image data. Vector image data is typically used in high-resolution and fast-moving imaging systems, such as those associated with gaming systems, and in particular three-dimensional gaming systems. Generally speaking, vector image data (or vector data) includes data that defines specific map features to be displayed as part of an image via an image display device. Each map feature or image element is generally made up of or drawn as a set of one or more triangles (of different sizes, shapes, colors, fill patterns, etc.), with each triangle including three vertices interconnected by lines. Thus, for any particular map feature, the map database stores a set of vertex data points, with each vertex data point defining a particular vertex of one of the triangles making up the map feature. Generally speaking, each vertex data point includes data pertaining to a two-dimensional or a three-dimensional position of the vertex (in an X, Y or an X, Y, Z coordinate system, for example) and various vertex attributes or display attributes defining properties of the vertex, such as color properties, fill properties, line width properties for lines emanating from the vertex, etc. A set of display attributes may be defined as a style. Accordingly, in the vector image process, map data may be divided into 1) data pertaining to spatial properties of map features that describe the spatial layout of a map feature and 2) style properties that determine how to render the map feature once the spatial properties are provided. During the image rendering process, the vertices and corresponding style properties defined for various map features are provided to and are processed in one or more image shaders which operate in conjunction with a graphics processing unit (GPU), such as a graphics card or a rasterizer, to produce a two-dimensional image on a display screen.
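By way of illustration only, the separation between spatial properties (vertices) and style properties described above might be sketched as follows; all identifiers here are hypothetical and are not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Style:
    """A named set of display attributes (illustrative subset)."""
    color: str = "#808080"
    fill_pattern: str = "solid"
    line_width: float = 1.0

@dataclass
class Vertex:
    """One vertex of a triangle making up a map feature."""
    x: float
    y: float
    z: float = 0.0       # may be omitted for two-dimensional data
    style_ref: int = 0   # reference into a style lookup table

@dataclass
class MapFeature:
    """Spatial layout (vertices) kept separate from style properties."""
    name: str
    vertices: list = field(default_factory=list)
    style: Style = field(default_factory=Style)

# A road feature drawn from a single triangle of three vertices.
road = MapFeature("Main St", vertices=[Vertex(0, 0), Vertex(10, 0), Vertex(10, 2)])
```

In this sketch, rendering code would read the vertex positions for layout and consult the referenced style only when deciding how to draw them, mirroring the two-part division of map data described above.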
Currently, the raw map data stored in most map databases represents map features without a time-dependent parameter. In particular, the map features may have fixed style parameters that cannot be adjusted to correspond with time. Thus, for a given map area, map features may not reflect time-based changes using existing systems. Current online maps, for example, may show the map in a time-ambivalent state, in which neither map feature content nor the styling of that content is affected by time of day, season, or year. These time-ambivalent maps may not provide an appropriate context for a user. Map views, for example, may typically be shown in a state of perpetual daylight and summer, and points of interest are static. While these time-ambivalent maps may be useful for general navigation, they may not be appropriate to particular user contexts in which a map viewing focus may depend on time-based information.
A computer-implemented method for providing map data to a computing device includes receiving a request for responsive map data, the request including a user time parameter. The method retrieves map data that includes a set of map features representing physical map items, wherein each map feature is associated with a style parameter. The style parameter provides information for a map rendering device to render an appearance of the map feature, wherein each style parameter is assigned a default style. The method determines a set of feature categories of the set of map features, wherein the set of feature categories includes at least a general road, an area, or a location category of map features. The method assigns a single first modified style to the style parameters of the set of map features belonging to one of the feature categories based on the user time parameter to generate responsive map data. The first modified style includes at least one display attribute different from a display attribute of the default style. The method provides access to the responsive map data, wherein the responsive map data includes the assigned first modified style.
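The steps of the method above might be sketched as follows. This is an illustrative assumption only: the identifiers are hypothetical, and a simple day/night split stands in for a fuller time-based style schema.

```python
from datetime import datetime

DEFAULT = "default"
NIGHT = "night_style"

def generate_responsive_map_data(features, user_time):
    """Assign a single modified style, per feature category, based on time."""
    # Choose a modified style from the user time parameter (assumed
    # day/night split for illustration).
    modified = NIGHT if user_time.hour >= 20 or user_time.hour < 6 else DEFAULT
    for feature in features:
        # Restyle one chosen feature category (here, general roads).
        if feature["category"] == "road":
            feature["style"] = modified
    return features

features = [
    {"name": "Main St", "category": "road", "style": DEFAULT},
    {"name": "City Park", "category": "area", "style": DEFAULT},
]
responsive = generate_responsive_map_data(features, datetime(2024, 1, 1, 22, 0))
```

With a 10 p.m. user time parameter, the road category receives the modified style while other categories retain their defaults, yielding responsive map data as described.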
In an embodiment, a second modified style is assigned to some of the set of map features of the one feature category, wherein the first and second modified style are a subset of a total number of styles that are assignable to the one feature category. In an embodiment, the map features are ranked based on the user time parameter, and the first modified style is assigned based on a prominence rating of the first modified style wherein a degree of prominence of the first modified style increases with increasing rank of the map feature for a given time value of the user time parameter.
In an embodiment, each map feature of the default set of map features includes a feature time parameter distinct from a style time parameter, and the map features are ranked by a time difference between the feature time parameter and the user time parameter, wherein map features having a smaller time difference are ranked higher. In an embodiment, the first modified style includes one of an angle or a direction of a display shadow of a map feature, a temperature-related shade or color, or an illumination factor.
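The time-difference ranking in the embodiment above might be sketched as follows, with hypothetical names and feature times expressed in hours purely for illustration.

```python
def rank_by_time(features, user_time):
    """Sort so the smallest |feature_time - user_time| ranks first."""
    return sorted(features, key=lambda f: abs(f["feature_time"] - user_time))

features = [
    {"name": "Theater", "feature_time": 20},  # hours of the day, illustrative
    {"name": "Cafe", "feature_time": 8},
    {"name": "Club", "feature_time": 23},
]

# With a user time parameter of 21:00, the theater (difference of 1 hour)
# ranks above the club (2 hours), which ranks above the cafe (13 hours).
ranked = rank_by_time(features, user_time=21)
```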
In another embodiment, a computer device includes a communications network interface, one or more processors, one or more memories coupled to the one or more processors and a display device coupled to the one or more processors. The one or more memories include computer executable instructions that are executed on the processor to receive a request for responsive map data, the request including a user time parameter. The computer executable instructions are executed to retrieve map data that includes a set of map features representing physical map items wherein each map feature is associated with a style parameter, wherein the style parameter provides information for a map rendering device to render an appearance of the map feature, and wherein each style parameter is assigned a default style. The computer executable instructions are executed to determine a set of feature categories of the set of map features, wherein the set of feature categories includes at least a general road, an area, or a location category of map features. The computer executable instructions are executed to assign a single first modified style to the style parameters of the set of map features belonging to one of the feature categories based on the user time parameter to generate responsive map data, wherein the first modified style includes at least one display attribute different from a display attribute of the default style. The computer executable instructions are executed to provide access to the responsive map data, wherein the responsive map data includes the assigned first modified style.
The claimed method and system generally adapts map content based on a user time parameter. An appearance of a map feature may be altered with reference to a time attribute to render maps in a much more dynamic manner. In particular, the appearance of a map feature may be altered by changing a style used to render the map feature, wherein the style includes one or more display attributes that provide information on how the map feature is to be rendered. In an embodiment, the styles may be organized by time so that an appropriate style may be selected and applied to a map feature based on a user time parameter. Generally, adjusting styles of map features based on time may tailor the appearance of map content to provide a more aesthetically pleasing user experience that corresponds with a user's current situation. Moreover, styling the map features based on time may provide additional time-relevant information that may be important to a user's focus (e.g., search context) that would otherwise be unavailable or difficult to ascertain.
Generally,
The map database 12 may store any desired types or kinds of map data including raster image map data (e.g., bitmaps) and vector image map data. However, the image rendering systems described herein may be best suited for use with vector image data which defines or includes a series of vertices or vertex data points for each of numerous sets of image objects, elements or primitives within an image to be displayed. Generally speaking, each of the image objects defined by the vector data will have a plurality of vertices associated therewith and these vertices will be used to display a map-related image object to a user via one or more of the client devices 16-22. As will also be understood, each of the client devices 16-22 includes an image rendering engine having one or more processors 30, one or more memories 32, a display device 34, and in many cases a rasterizer or graphics card 36 which are generally programmed and interconnected in known manners to implement or to render graphics (images) on the associated display device 34. The display device 34 for any particular client device 16-22 may be any type of electronic display device such as a liquid crystal display (LCD), a light emitting diode (LED) display, a plasma display, a cathode ray tube (CRT) display, or any other type of known or suitable electronic display.
Generally speaking, the map-related imaging system 10 of
Referring now to
During operation, the map logic of the map application 48 executes on the processor 30 to determine the particular image data needed for display to a user via the display device 34 using, for example, user input, GPS signals, prestored logic or programming, etc. The display or map logic of the application 48 interacts with the map database 12, using the communications routine 43, by communicating with the server 14 through the network interface 42 to obtain map data, preferably in the form of vector data or compressed vector data from the map database 12. This vector data is returned via the network interface 42 and may be decompressed and stored in the data memory 49 by the routine 43. In particular, the data downloaded from the map database 12 may be a compact, structured, or otherwise optimized version of the ultimate vector data to be used, and the map application 48 may operate to transform the downloaded vector data into specific vertex data points using the processor 30a. In one embodiment, the image data sent from the server 14 includes vector data generally defining data for each of a set of vertices associated with a number of different image elements or image objects to be displayed on the screen 34 and possibly one or more lookup tables which will be described in more detail below. If desired, the lookup tables may be sent in, or may be decoded to be in, or may be generated by the map application 48 to be in the form of vector texture maps which are known types of data files typically defining a particular texture or color field (pixel values) to be displayed as part of an image created using vector graphics. More particularly, the vector data for each image element or image object may include multiple vertices associated with one or more triangles making up the particular element or object of an image. Each such triangle includes three vertices (defined by vertex data points) and each vertex data point has vertex data associated therewith. 
In one embodiment, each vertex data point includes vertex location data defining a two-dimensional or a three-dimensional position or location of the vertex in a reference or virtual space, as well as an attribute reference. Each vertex data point may additionally include other information, such as an object type identifier that identifies the type of image object with which the vertex data point is associated. The attribute reference, referred to herein as a style reference or as a feature reference, references or points to a location or a set of locations in one or more of the lookup tables downloaded and stored in the data memory 49.
To restyle map features based on the process of
Map features may include, for example, individual roads, text labels (e.g., map or street labels), areas, text boxes, buildings, points of interest markers, terrain features, bike paths, etc. Map features may be divided into features that represent actual physical items on a map surface, such as roads, areas, and locations, as well as map features that do not represent actual physical items, such as text labels and text boxes. In some embodiments, the claimed method and system may apply to only some map features or types of map features. For example, the claimed method and system, according to one embodiment, may apply styling only to a subset of map features, such as those representing actual physical items.
A style may be defined as a set of display attributes or parameters that are used to determine the appearance of a map feature. A style may include a fill color (e.g., for area objects), an outline color, an outline width, an outline dashing pattern and an indication of whether to use rounded end caps (e.g., for road objects), an interior color, an interior width, an interior dashing pattern, and interior rounded end caps (e.g., for road objects), a text color and a text outline color (e.g., for text objects), an arrow color, an arrow width, an arrow dashing pattern (e.g., for arrow objects), a text box fill color and a set of text box outline properties (e.g., for text box objects) to name but a few. Of course, different ones of the vertex display attributes (or simply display attributes) provided may be applicable or relevant to only a subset of image objects or map features and thus the vertex style data points associated with a particular type of image object may only refer to a subset of the vertex attributes listed for each style. Alternatively, some styles may only be assignable to a particular image object or map feature based on the type of map feature. For example, road styles may be assignable only to road map features while area styles may be assignable only to area map features. In an embodiment, a style may reference one or more images such as a raster image or bitmap. The image may be used to render at least a portion of a map feature and/or may represent a style of the map feature. A display attribute of a style parameter may reference the image thereby associating the style to the image. The image may be, for example, a symbol representing a map feature, a portion of a map feature, or a type of a map feature. In an embodiment, the image may be a symbol or other label used to identify the map feature.
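The style definitions above might be sketched as follows. The style names, attribute names, and the rule that a style is assignable only to its own feature type are illustrative assumptions.

```python
# Styles as named sets of display attributes, where each style carries
# only the attributes relevant to its feature type.
styles = {
    "road_day": {
        "outline_color": "#333333", "outline_width": 2.0,
        "interior_color": "#ffdd55", "rounded_end_caps": True,
    },
    "area_day": {
        "fill_color": "#ccebc5", "outline_color": "#88aa88",
    },
    "text_day": {
        "text_color": "#000000", "text_outline_color": "#ffffff",
    },
}

def applicable(style_name, feature_type):
    """Assumption: a style is assignable only to its matching feature type
    (e.g., road styles only to road map features)."""
    return style_name.startswith(feature_type)
```

Under this sketch, a road map feature could be assigned "road_day" but not "area_day", reflecting the category-exclusive styles described above.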
Generally speaking, the techniques for restyling or adjusting the appearance of map features may organize a set of styles based on a time parameter. These styles may then be assigned to map features of map data to adapt the map data to the user time parameter. Restyling the map data based on time may enable a map rendering system to provide time-based distinctions in map features that not only enhance a user experience by increasing map aesthetics, but provide additional context data to a map display so that a user may, for example, sort map data more efficiently or make better informed decisions on navigation and map feature searching based on time.
A user time parameter may generally indicate a current time or time period in which the user is operating a map rendering device to view one or more maps. In this situation, a current time or time period may be used to find relevant map data that is used to render map features. In an embodiment (to be described further below), the user time parameter may be a user adjustable or a user settable time. Corresponding map feature and style data may be retrieved based on the user set time. For example, where style data is organized according to the data associations of
In an embodiment, retrieved map data (e.g., of block 102) may have a set of map features with a set of default styles associated with the map features. In this case, the map features may be restyled (i.e., reassigned a new or modified style) based on time. The assignment may result in a data association such as those in
In an embodiment, a table such as that of
In
As discussed above, the style parameter may be used to determine how additional image data may be used to render the appearance of a map feature. For example, when drawing additional vector based map features, a style associated with a map feature may have one or more display attributes indicating what graphical elements may be added to provide the map feature with a particular appearance or style based on time.
In an embodiment, a display schema may be used to assign styles to map features based on time. A display schema is defined as a set of styles that are associated with a set of feature categories, where the set of styles may correspond to a time or a time period. A feature category may be any set of map features that may be associated with each other. Some examples of map feature categories or feature types may be roads, areas, or point locations. Map feature categories may be more specific (e.g., categories may have subcategories) such as highways, streets, and alleys, parks, monuments, restaurants, post offices, etc., to name a few. A display schema may be used to render an area of a map in a manner that not only indicates time, but also shows how a set of map features relate with each other based on time. The following figures illustrate how display schemas may be organized and implemented.
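One minimal way such a display schema might be organized is sketched below; the schema keys, category names, and style names are all hypothetical.

```python
# A display schema associates a set of styles with a set of feature
# categories, keyed by a time or time period.
display_schemas = {
    "day":   {"road": "road_day",   "area": "area_day",   "location": "poi_day"},
    "night": {"road": "road_night", "area": "area_night", "location": "poi_night"},
}

def style_for(category, time_period):
    """Look up the style assigned to a feature category by the schema
    for the given time period."""
    return display_schemas[time_period][category]
```

Rendering an area of the map would then apply one schema uniformly, so that related map features are styled consistently for the same time period.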
In an embodiment utilizing a time-based display schema, assigning a style to a map feature category may include assigning all map features belonging to that category a default style. In this case, a particular map feature that needs to be contrasted may have its default style overwritten with a new modified style. Many factors may contribute to whether a selection or subset of the features in a particular category may be assigned a different style instead of the default category style. For example, particular map features may be highlighted (i.e., assigned a different style other than a default style) to intentionally draw out a contrast between the selected map feature and other map features belonging to the highlighted map feature's category.
In an embodiment of a display schema, assigning a map feature category a style may include assigning map features belonging to the feature category a restricted subset of style parameters or a restricted range of style parameters. A map feature category, such as a road, may already have a limited number of styles that are applicable to the map feature. For example, a road may not be assigned a style for a building because the two styles are exclusive to their feature category. However, by restricting the range of styles that are available or assignable to a map feature category, a visual contrast or distinction may be conveyed between how a feature is generally rendered with time based information and how a feature is rendered without time-based information. In an embodiment, the restricted range of style parameters may include styles that are defined by the same set of display parameters but with different values for some of the display parameters. For example, only a single color attribute may differ between the range of styles assigned to a map feature category. A nighttime display schema, for example, may have all roads colored black, while a daytime display schema may have all roads colored yellow where all other display parameters for the road styles remain constant.
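The yellow-by-day, black-by-night road example above might be sketched as follows, showing two styles in a restricted range that share every display parameter except a single color attribute (the parameter names are illustrative).

```python
# Shared display parameters for all road styles in the restricted range.
base_road = {"outline_width": 2.0, "dashing": None, "rounded_end_caps": True}

# The two assignable road styles differ only in the color attribute.
road_day = {**base_road, "color": "yellow"}
road_night = {**base_road, "color": "black"}

differing = {k for k in road_day if road_day[k] != road_night[k]}
```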
In an embodiment, the restricted subset of style parameters that may be assigned to a feature category, as defined by a display schema, may be based on a prominence rating of the subset of styles. Generally, style parameters may be used to distinguish different map features via a difference in display prominence of a style. A degree of prominence or a prominence rating may be determined based on a number of different factors, but generally refer to a degree of visual difference between styles (i.e., contrast).
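A prominence-based restriction might be sketched as follows; the style names, prominence values, and range bounds are illustrative assumptions.

```python
styles = [
    {"name": "faint_road", "prominence": 2.0},
    {"name": "plain_road", "prominence": 4.5},
    {"name": "lit_road", "prominence": 7.0},
    {"name": "glow_road", "prominence": 9.5},
]

def restrict(styles, low, high):
    """Keep only styles whose prominence rating lies within [low, high]."""
    return [s for s in styles if low <= s["prominence"] <= high]

# A display schema might limit nighttime road styles to a mid-range of
# prominence, excluding both the faintest and the most prominent styles.
night_road_styles = restrict(styles, 4, 9)
```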
The claimed method and system may use a display schema to define a subset of styles that can be assigned to a map feature category based on a prominence range of the styles that are assignable to the map feature category. For example, a display schema may determine that the styles used to render road features at night should have prominence ratings in a range of 4-9. Thus, if the styles of
Generally, a map feature corresponding to a physical item on a map (e.g., a road, an area, or a location) may be associated with one or more events, which are ancillary map features that do not represent an actual physical item. The events may include, for example, hours of operation or, in the case of an entertainment venue such as a theater or concert park, one or more performances or shows.
In an embodiment, the ranking may be used to determine a corresponding style for rendering the selected map features. In particular a style may be assigned to selected map features based on an assigned rank. In an embodiment, higher ranked map features may be assigned styles that have a higher degree of prominence or higher prominence rating (see description above). In an embodiment in which a display schema restricts the range of assignable styles, the restricted set of styles may be applied based on matching prominence and rank. Thus, where a restricted set of styles for a point location is limited to a style having a prominence of 3 and a style having a prominence of 4.5, the style with prominence 4.5 may be assigned to high ranked map features in the selected set of map features and the style with prominence 3 may be assigned to low ranked features of the set. It should be noted that the map features of
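The rank-to-prominence matching in the point-location example above might be sketched as follows, with hypothetical names throughout.

```python
def assign_by_rank(ranked_features, restricted_styles):
    """Assign styles by matching prominence to rank: ranked_features is
    ordered highest rank first, and the most prominent style in the
    restricted set goes to the highest-ranked feature."""
    ordered = sorted(restricted_styles, key=lambda s: -s["prominence"])
    for feature, style in zip(ranked_features, ordered):
        feature["style"] = style["name"]
    return ranked_features

# A restricted set for point locations: one style of prominence 3 and
# one of prominence 4.5, as in the example above.
features = [{"name": "Theater"}, {"name": "Cafe"}]  # Theater ranked higher
styles = [{"name": "poi_low", "prominence": 3.0},
          {"name": "poi_high", "prominence": 4.5}]
assigned = assign_by_rank(features, styles)
```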
Returning to
Generally, illumination during an evening period may be used to indicate several aspects of a map. First, illumination or styles showing illumination may be used to indicate the location of well-lit streets for the purposes of navigational safety. For example, a user who is uncomfortable traveling at night may use the illumination styling information to decide to route his trip only through well-lit areas for safety and/or ease of navigation. Second, the illumination mapping may be used to determine areas of nightlife. More particularly, well-lit areas may represent areas in which one or more points of interest may be located for a given evening period. In an embodiment in which the illumination is current or is based on a live feed of data, the illumination may be used to indicate locations of current evening events such as a concert or other outdoor events. The illumination may also indicate a central business district that is generally well lit. Moreover, nighttime illumination may also indicate the general presence of populated versus non-populated areas and may be further used to determine navigation options at night. Of course, the uses of a display schema that includes nighttime illumination include many more examples than those listed. In an embodiment, nighttime illumination may be represented differently for a first set of map features than for a second set of map features.
Generally,
Any suitable subset of the blocks may be implemented in any suitable order by a number of different devices (e.g., client or server) and remain consistent with the method and system described herein. Moreover, additional determination blocks may be added to refine the filtering of style parameters subject to interpolation processing.
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
For example, the network 25 may include but is not limited to any combination of a LAN, a MAN, a WAN, a mobile, a wired or wireless network, a private network, or a virtual private network. Moreover, while only four client devices are illustrated in
Additionally, certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the term hardware should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware and software modules can provide information to, and receive information from, other hardware and/or software modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware or software modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware or software modules. In embodiments in which multiple hardware modules or software are configured or instantiated at different times, communications between such hardware or software modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware or software modules have access. For example, one hardware or software module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware or software module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware and software modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
The one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application program interfaces (APIs)).
In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” or a “routine” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms, routines and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.
Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
As used herein, any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some embodiments may be described using the expressions “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. The embodiments are not limited in this context.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, the terms “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
Still further, the figures depict preferred embodiments of a map rendering system for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for rendering map or other types of images using the principles disclosed herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
This application claims the benefit of U.S. Provisional Patent Application No. 61/673,598, which was filed on Jul. 19, 2012, the disclosure of which is hereby incorporated herein by reference in its entirety for all purposes.
Number | Date | Country
---|---|---
61673598 | Jul 2012 | US