Computing devices may display maps to help a user determine a route to a destination, plan an itinerary for a trip, or perform other functions. For example, a user may enter a starting location and a destination location, and the computing device may display on the map indications of one or more routes between the starting location and the destination location.
Examples are disclosed that relate to inking inputs made to a map displayed on a computing device. One example provides, on a computing device, a method comprising displaying a map via a display device operatively coupled to the computing device, receiving user input of one or more inking inputs made relative to the displayed map, and in response displaying over the map an annotation for each inking input received. The method further comprises determining a map location of each of the one or more inking inputs, determining an intended meaning of each of the one or more inking inputs based upon one or more features of the inking inputs, and performing an action on the computing device based at least on the map location and the intended meaning determined for each of the one or more inking inputs.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
Map applications on computing devices may allow a user to plan trips or choose an efficient route to reach a given destination via a graphical user interface that displays a map to the user. However, such applications may require the user to type a starting address and a destination into text entry fields to generate a route, and may generate and display a route based upon efficiency, sometimes along with alternative routes. If a different mode of transportation or a different route is preferred, an application may require the user to enter additional input, e.g. selecting a route via another transportation mode (such as a bus or train), or moving the route to a more scenic one. Such interactions with the map may be cumbersome and/or time-consuming. Further, a user may not be able to create a multi-day itinerary that is displayed on a single map page, nor conveniently select multiple destinations within a region and have a route automatically determined by the map application.
Thus, examples are disclosed herein that may help to address these and other issues. Briefly, a user may make inking inputs to a map application that is displaying a map on a computing device, and the computing device may interpret the inking inputs and perform associated actions in response. As used herein, the terms “ink” and “inking” may refer to annotations to displayed content (e.g. a displayed map) in the form of displayed marks/strokes made via an input device, and the term “inking input” and the like refer to inputs used to enter such ink. Such inputs may be made via a stylus or finger on a touch sensor, via a gesture detection system (e.g. one or more cameras, depth cameras, and/or motion sensors configured to capture body part gestures, such as finger/arm/eye gestures), or via any other suitable input mechanism.
In some examples, the user may create links between inking input features (e.g. inking shapes, inking text, inking line types (dashed v. solid), inking colors, inking input velocity characteristics, inking input pressure characteristics, etc.) and specific actions a map application may perform. For example, a map application may provide a “planning a trip” mode in which a user is instructed to use pen or touch to select or draw a shape for each day/week he or she will be on the trip. The user may designate a title for each shape, such as the day of the week associated with that shape. Next, the user may draw the designated shapes onto the map using inking inputs to indicate locations that the user wishes to visit each day/week of the trip. For example, circles could be drawn at selected locations to represent places to visit on Monday, squares to represent places to visit on Tuesday, and a shape associated with “don't forget” (or the actual words “don't forget”) may be used for must-see places. While the user is drawing the shape on top of the places desired to visit, an inking engine saves the path into a “recognizer” for future use, as sketched below. In some examples, a user may enter text-based inking inputs in addition to or instead of the shape-based inking inputs, and the text-based inputs may be recognized by a word recognizer. For example, a user may circle a destination and write “Monday” next to the circle to indicate that the location is a place to visit on Monday. After drawing the items, a new sub-collections folder or list may be created under a “trip itinerary” collection, either automatically or by user input (e.g. selection of a “done” user interface control) signifying that the locations for that trip have been entered. The user may then see a single view of the map with the whole itinerary, may filter the map view by day (e.g. to show just Monday's places, just must-see places, etc.) or arrange the view by any other suitable categories, and/or take other suitable actions.
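By way of non-limiting illustration, such a shape “recognizer” might resemble the following minimal sketch, in which a user-defined label (e.g. “Monday”) is stored with each saved stroke path; all names, and the naive stroke-distance metric, are hypothetical and merely show one possible approach.

```python
import math
from dataclasses import dataclass, field

def stroke_distance(a, b):
    """Naive mean point-to-point distance between two strokes, each a list of (x, y) points."""
    n = min(len(a), len(b))
    if n == 0:
        return float("inf")
    return sum(math.dist(a[i], b[i]) for i in range(n)) / n

@dataclass
class ShapeRecognizer:
    """Maps user-defined labels (e.g. 'Monday', 'must-see') to saved stroke templates."""
    templates: dict = field(default_factory=dict)  # label -> list of stroke paths

    def register(self, label, stroke_path):
        # Save the inked path so future strokes of the same shape can be recognized.
        self.templates.setdefault(label, []).append(stroke_path)

    def classify(self, stroke_path):
        # Return the label whose saved template best matches the new stroke.
        best_label, best_score = None, float("inf")
        for label, paths in self.templates.items():
            for path in paths:
                score = stroke_distance(stroke_path, path)
                if score < best_score:
                    best_label, best_score = label, score
        return best_label
```

Under this sketch, once a circle has been registered as “Monday,” a later circle drawn over a map location would classify as a Monday place and could be filed under the corresponding itinerary sub-collection.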
Furthermore, one or more of those places may have additional detail displayed (e.g. as a “card” associated with the item). The card may have any suitable information about a location, including but not limited to a phone number, address, pictures, etc. The information displayed on the card may be obtained in any suitable manner, such as via a web search conducted by the computing device upon receiving the inking input associated with the location.
Thus, by informing the map application what shapes to recognize, a user may quickly and easily enter trip information on a map and then display the trip information in various different ways. It will be understood that each shape or other annotation may be given any desired meaning based upon how the user later wishes to view the information. As another example, shapes or other annotations may be defined by a type of location (e.g. waterfalls, wineries, state parks, etc.), and routes may be planned between locations of desired types by filtering the view by location type.
A user's hand 106 is illustrated in
In response to receiving the inking input, computing device 100 may execute one or more actions associated with the inking input. For example, in response to receiving the circle annotation around the intersection on the map, the computing device may display information associated with that location (e.g., address, business information, etc.). Also, the computing device 100 may use the circled location as a starting location for a route, as described in more detail below, or may execute any other suitable function in response to detecting and interpreting the inking input.
At 204, method 200 includes receiving an inking input. For example, the inking input may include touch input made to a touch-sensitive display via a stylus or finger, as indicated at 206, or may include any other suitable input. At 208, method 200 includes displaying an annotation as inking on the map (e.g. a graphical representation of a path of the inking input), and at 210, determining a location on the map that corresponds to the location of the inking input. When the inking input covers more than one map address (e.g., a user-input circle inadvertently includes multiple map addresses), any suitable mechanism may be used to disambiguate which address the user intended to ink over, including but not limited to identifying the center-most location, identifying a most likely location (e.g. the largest town within the inking input area), identifying a most popular address (e.g. based upon prior behavior of the user and/or other users as tracked via a remotely located map server), or another suitable mechanism. Further, some inking inputs may be intended to select multiple locations. In such instances, each of the multiple locations may be associated with the inking input.
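As a non-limiting sketch of the center-most/most-popular disambiguation strategies noted above, one possible implementation follows; the candidate fields and the tie-breaking order are assumptions, and any of the other described strategies would be equally valid.

```python
import math

def disambiguate(candidates, ink_center):
    """Pick the address a circling gesture most likely intended.

    candidates: list of dicts, each with a 'location' (x, y) tuple and an
    optional 'visit_count' popularity score. Prefers the center-most
    candidate, breaking ties by popularity (higher visit_count wins).
    """
    return min(
        candidates,
        key=lambda c: (math.dist(c["location"], ink_center),
                       -c.get("visit_count", 0)),
    )
```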
At 212, method 200 includes determining an intended meaning of the inking input. The intended meaning may be determined in any suitable manner. In one example, the computing device may store a table or other data structure that indexes inking input features (e.g., annotation shapes, words, numbers, colors, input characteristics such as speed or pressure, etc.) to respective intended meanings. The association between each inking input feature and intended meaning may be predetermined (e.g. coded into the application at development time), or may be user-defined. In one example, the computing device may display a drop-down menu each time the user enters an inking input with a new feature, and the user may select from among a list of possible meanings displayed within the drop-down menu in order to assign a meaning to the inking input feature. In another example, the computing device may learn which meaning the user intended to input based on previous user interactions with the map application. In yet another example, a user may define a first use instance of an inking input feature with text input (e.g. also made by inking), wherein the text defines the meaning of the inking feature. In such examples, the computing device may interpret the inked text and then store the interpretation of the inked text as the intended meaning for that feature. One example of such an input would be an inking input associating a shape with a day of the week. Additionally, an intended meaning may be determined collectively for multiple inking inputs, such as where a user draws two circles on a map, one representing a starting location and one representing a destination location, to determine a route between the locations.
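One possible form of such a feature-to-meaning table, holding both predefined and user-defined entries, is sketched below; the specific feature keys and meaning strings are hypothetical.

```python
# Hypothetical index of inking-input features to intended meanings. Entries may
# be coded in at development time (predefined), added when the user picks a
# meaning from a drop-down, learned from prior interactions, or defined by
# inked text (e.g. 'Monday' written beside a new shape).
MEANINGS = {
    ("shape", "circle"): "select location",
    ("line_style", "solid"): "fastest route",
    ("line_style", "dashed"): "scenic route",
    ("text", "monday"): "visit on Monday",
}

def intended_meanings(features):
    """Resolve an inking input's features to meanings. An unknown feature could
    trigger a drop-down so the user can assign a user-defined meaning."""
    return [MEANINGS[f] for f in features if f in MEANINGS]
```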
Any suitable features of an inking input may be identified to determine an intended meaning. Examples include, but are not limited to, a shape of the inking input, a color of the inking input, a size of the inking input, a pressure of the user input strokes, a pattern of the input strokes (e.g. solid v. dashed), and a speed of the user input strokes. Determining the shape of the inking input may include, for example, determining whether the input comprises a straight line, circle, square, or other shape, determining whether the shape includes solid lines, dashed lines, or other line type, and determining whether letters and/or numbers are represented by the inking input (e.g. identifying text in the inking input). In some instances the user may enter more than one inking input (e.g., the user may circle two locations and draw a line between them), and the map location and features of each inking input may be determined. In such an example, a solid line drawn between the circles may represent one desired route characteristic (e.g. most efficient) while a dashed line may indicate another desired route characteristic (e.g. most scenic). In each of the examples described above, the intended meaning of the inking input also may be determined based at least in part on the features of the map displayed, such as level of zoom, geographic features represented by the map (e.g., ocean versus land), and/or other features. For example, if the map is displayed at a relatively low level of zoom (e.g., an entire continent is displayed), the computing device may determine that the user intends to determine a route via plane rather than via bus or bike.
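A minimal sketch of how displayed-map context might bias interpretation, per the continent-level-zoom example above, follows; the zoom scale and threshold are purely illustrative assumptions.

```python
def infer_transport_mode(zoom_level, spans_water=False):
    """Guess the travel mode a route-drawing gesture implies from map context.

    Assumes a web-map-style zoom scale where 0 shows the whole world and ~20
    shows a single street; the threshold of 5 is illustrative only.
    """
    if spans_water or zoom_level < 5:  # continental view or open ocean -> fly
        return "plane"
    return "driving"
```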
Continuing with
Thus, in some examples, the computing device may receive a plurality of user-defined meanings, each associated with a corresponding inking input, via user input/selection of the meanings and associated inking inputs. When a map is displayed, the user may enter two or more inking inputs on the displayed map. The computing device may receive these inking inputs and determine a map location of each of the two or more inking inputs, as well as the intended meaning of each of the two or more inking inputs based upon the plurality of user-defined meanings provided previously. In response to receiving the two or more inking inputs, the computing device may display a route between corresponding locations of the two or more inking inputs. The route may be selected from among a plurality of possible routes based on the intended meaning of each inking input. For example, as described above, a scenic route may be selected when the inking inputs indicating the corresponding locations are linked with a dashed or arc-shaped line, while a fastest route may be selected when the inking input between the corresponding locations is a solid straight line.
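For illustration, route selection keyed to the meaning of the connecting stroke might be sketched as follows; the route fields are hypothetical and stand in for whatever a routing service returns.

```python
def select_route(routes, connector_meaning):
    """Choose among candidate routes using the meaning of the connecting stroke.

    routes: list of dicts with 'duration_min' and 'scenic_score' fields
    (hypothetical). A dashed or arc-shaped connector maps to 'scenic route';
    a solid straight line maps to the fastest route.
    """
    if connector_meaning == "scenic route":
        return max(routes, key=lambda r: r["scenic_score"])
    return min(routes, key=lambda r: r["duration_min"])
```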
Accordingly, as shown in
Next.
Thus, inking inputs may be used as a way to express different collections on a map or as a way to quickly perform map-related operations, such as determining a route between a set of points on the map. Inking inputs further may be used to perform functions other than those shown. For example, in the case of a route calculation, a specific-shaped inking input may be used to indicate that the user desires the fastest route between the two points, instead of the user having to fill in the “From” and “To” fields of a directions search box, click “go”, and then turn on traffic. For example, as explained above, a straight line drawn between two locations may indicate that a fastest route is desired, while an arc-shaped line drawn between two locations may indicate that a scenic route is desired. Further, a user may use an inking input to enter a time of day he or she would like to arrive or start the trip, e.g., “Start at 9 AM” next to a star symbol, and the routing algorithm would start that route at 9 AM. This may help to choose a route depending upon daily traffic patterns. Further still, a user may write “Bus” or “Train” to indicate that they would like the route to be via transit, instead of driving.
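The text-based options above (“Start at 9 AM”, “Bus”, “Train”) could be parsed into routing parameters roughly as follows; the recognized-text interface and parameter names are assumptions rather than any particular map application's API.

```python
import re

def routing_params_from_ink(recognized_texts):
    """Extract routing options from words recognized in inked annotations."""
    params = {"mode": "driving", "depart_at": None}
    for text in recognized_texts:
        m = re.search(r"start at (\d{1,2})\s*(am|pm)", text, re.IGNORECASE)
        if m:  # e.g. 'Start at 9 AM' -> depart at 09:00
            hour = int(m.group(1)) % 12 + (12 if m.group(2).lower() == "pm" else 0)
            params["depart_at"] = f"{hour:02d}:00"
        elif text.strip().lower() in ("bus", "train"):
            params["mode"] = "transit"  # route via transit instead of driving
    return params
```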
As another example, a user may draw a “reminder” symbol on a map along with additional information via text (e.g., dinner Wednesday at 7), and the computing device may store and later output a reminder to the user to attend dinner at the specified location at the specified time. In some examples, the computing device may communicate the actions associated with the annotations to a personal assistant device/application or other communicatively coupled device. As such, a personal assistant device may receive the reminder from the map application and then later output the reminder.
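A sketch of such a reminder hand-off follows, with a hypothetical `notify` interface standing in for whatever a communicatively coupled personal-assistant device or application actually exposes.

```python
from dataclasses import dataclass

@dataclass
class Reminder:
    text: str        # e.g. "dinner Wednesday at 7"
    location: tuple  # map coordinates of the inked reminder symbol

def create_reminder(ink_text, map_location, assistant=None):
    """Store a reminder for an inked note and optionally forward it to a
    communicatively coupled personal-assistant device/application."""
    reminder = Reminder(ink_text, map_location)
    if assistant is not None:
        assistant.notify(reminder)  # hypothetical assistant interface
    return reminder
```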
In some examples, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
Computing system 600 includes a logic machine 602 and a storage machine 604. Computing system 600 may optionally include a display subsystem 606, input subsystem 608, communication subsystem 610, and/or other components not shown.
Logic machine 602 includes one or more physical devices configured to execute instructions. For example, the logic machine may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic machine may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic machine may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic machine optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage machine 604 includes one or more physical devices configured to hold instructions executable by the logic machine to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage machine 604 may be transformed—e.g., to hold different data.
Storage machine 604 may include removable and/or built-in devices. Storage machine 604 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage machine 604 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage machine 604 includes one or more physical devices. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage medium.
Aspects of logic machine 602 and storage machine 604 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
When included, display subsystem 606 may be used to present a visual representation of data held by storage machine 604. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine, the state of display subsystem 606 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 606 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic machine 602 and/or storage machine 604 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem 608 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some examples, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 610 may be configured to communicatively couple computing system 600 with one or more other computing devices. Communication subsystem 610 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some examples, the communication subsystem may allow computing system 600 to send and/or receive messages to and/or from other devices via a network such as the Internet.
Another example provides a method enacted on a computing device. The method includes displaying a map on a display device operatively coupled to the computing device, receiving user input of one or more inking inputs on the displayed map and displaying an annotation for each inking input received, determining a map location of each of the one or more inking inputs, determining an intended meaning of each of the one or more inking inputs based upon one or more features of the one or more inking inputs, and performing an action on the computing device based at least on the map location and the intended meaning determined for each of the one or more inking inputs. The inking input may additionally or alternatively include a shape, and the intended meaning may additionally or alternatively be determined based at least in part on the shape. The inking input may additionally or alternatively include text, and the intended meaning may additionally or alternatively be determined based at least in part on the text. The inking input may additionally or alternatively include a color, and the intended meaning may additionally or alternatively be determined based at least in part on the color. Determining the intended meaning of each of the one or more inking inputs may additionally or alternatively include determining a predefined meaning associated with each feature of the one or more features of the one or more inking inputs. Determining the intended meaning of each of the one or more inking inputs may additionally or alternatively include determining a user-defined meaning associated with each feature of the one or more features of the one or more inking inputs. Such an example may additionally or alternatively further include performing a search for information regarding a location associated with a selected inking input, and displaying search results for the location associated with the selected inking input. Receiving user input of one or more inking inputs on the displayed map may additionally or alternatively include receiving a plurality of inking inputs at a plurality of corresponding locations, and performing an action may additionally or alternatively include displaying a route between the plurality of corresponding locations. The plurality of inking inputs may additionally or alternatively include two or more different inking inputs that represent different filtering parameters, and such an example may additionally or alternatively include receiving a user input requesting to apply a filtering parameter to display a route between locations corresponding to the filtering parameter applied, and in response displaying a route between the locations based upon the filtering parameter applied. Performing an action may additionally or alternatively include performing a search for information on a selected location associated with an inking input, and displaying search results for the selected location. Any or all of the above-described examples may be combined in any suitable manner in various implementations.
Another example provides for a computing system including a display device, a processor, and memory storing instructions executable by the processor to send a map to the display device, the display device configured to display the map, receive user input of one or more inking inputs on the displayed map, determine a map location of each of the one or more inking inputs, determine an intended meaning of each of the one or more inking inputs based upon one or more features of each inking input, and perform an action based at least on the determined map location and the intended meaning of each inking input. The instructions may additionally or alternatively be executable to determine the intended meaning for each inking input based at least in part on a shape of the inking input. The instructions may additionally or alternatively be executable to determine the intended meaning from text represented by the inking input. The instructions may additionally or alternatively be executable to determine the intended meaning from an inking input color. The instructions may additionally or alternatively be executable to determine a predefined meaning associated with each of one or more of the inking inputs. The instructions may additionally or alternatively be executable to determine a user-defined meaning associated with each of one or more of the inking inputs. The instructions may additionally or alternatively be executable to perform a search for information regarding a location associated with a selected inking input, and display search results for the location associated with the selected inking input. The instructions may additionally or alternatively be executable to receive a plurality of inking inputs at a plurality of corresponding locations, and to perform an action by displaying a route between the plurality of corresponding locations. The plurality of inking inputs may additionally or alternatively include two or more different inking inputs that represent different filtering parameters, and the instructions may additionally or alternatively be executable to receive a user input requesting to apply a filtering parameter to display a route between locations corresponding to the filtering parameter applied, and in response display a route between the locations based upon the filtering parameter applied. Any or all of the above-described examples may be combined in any suitable manner in various implementations.
Another example provides a computing system including a display device, a processor, and memory storing instructions executable by the processor to receive a plurality of user-defined meanings each associated with a corresponding inking input, display a map on the display device, receive user input of two or more inking inputs on the displayed map, determine a map location of each of the two or more inking inputs, determine an intended meaning of each of the two or more inking inputs based upon the plurality of user-defined meanings, and display a route between corresponding locations of the two or more inking inputs, the route selected from among a plurality of possible routes based on the intended meaning of each inking input.
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
This application claims priority to U.S. Provisional Patent Application No. 62/314,290, filed Mar. 28, 2016, the entirety of which is hereby incorporated herein by reference.