1. Technical Field
The subject disclosure relates to mobile computing devices, and the provision of data and/or network services based on gesture and location information of the devices.
2. Background
By way of background concerning some conventional systems, mobile devices, such as portable laptops, PDAs, mobile phones, navigation devices, and the like, have been equipped with location based services, such as global positioning system (GPS) systems, WiFi, cell tower triangulation, etc., that can determine and record a position of mobile devices. For instance, GPS systems use triangulation of signals received from various satellites placed in orbit around Earth to determine device position. A variety of map-based services have emerged from the inclusion of such location based systems that help users of these devices be located on a map, facilitate point to point navigation in real-time, and search for locations near a point on a map.
However, such navigation and search scenarios are currently limited to displaying relatively static information about endpoints and navigation routes. While some of these devices with location based navigation or search capabilities allow update of the bulk data representing endpoint information via a network, e.g., when connected to a networked personal computer (PC) or laptop, such data again becomes fixed in time. Accordingly, it would be desirable to provide a set of richer experiences for users than conventional experiences predicated on location and conventional processing of static bulk data representing potential endpoints of interest. In addition, considering the complexity of input on touchscreens or tiny alphanumeric keypads typically provided for portable electronic devices, current ways of invoking the benefits of location-based services are inadequate.
The above-described deficiencies of today's location based systems and devices are merely intended to provide an overview of some of the problems of conventional systems, and are not intended to be exhaustive. Other problems with the state of the art and corresponding benefits of some of the various non-limiting embodiments may become further apparent upon review of the following detailed description.
A simplified summary is provided herein to help enable a basic or general understanding of various aspects of exemplary, non-limiting embodiments that follow in the more detailed description and the accompanying drawings. This summary is not intended, however, as an extensive or exhaustive overview. Instead, the sole purpose of this summary is to present some concepts related to some exemplary non-limiting embodiments in a simplified form as a prelude to the more detailed description of the various embodiments that follow.
Direction based pointing services are provided for portable devices or mobile endpoints. Mobile endpoints can include a positional component for receiving positional information as a function of a location of the portable electronic device, a directional component that outputs direction information as a function of an orientation of the portable electronic device, and a motion component that outputs motion information, e.g., from accelerometer(s), based on motion experienced by the portable device. Based on an analysis of any one or more of the motion information, the direction information or the location information, a set of device gestures is determined.
At least one processor of the device processes any one or more of the motion information, the direction information or the location information to determine a gesture or set of gestures, which determines an action, or a request for action, to be taken with respect to location based services. For a non-limiting example, gesture(s) can be determined based on an analysis of path information determined from the motion information, as well as direction information of the device for a given period of time while undergoing the two-dimensional (2-D) or three-dimensional (3-D) path defined by the path information. Alternatively, the information can be wholly or partly processed by a network service.
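By way of a non-limiting illustration of such gesture determination, the following Python sketch classifies a window of sensor samples into a coarse gesture from motion and direction information gathered over a period of time; the sample fields, thresholds and gesture names are assumptions for illustration, not part of any particular embodiment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    t: float        # timestamp, seconds
    heading: float  # compass heading, degrees clockwise from north
    ax: float       # device-frame acceleration components, m/s^2
    ay: float
    az: float

def classify_gesture(samples: List[Sample]) -> str:
    """Coarsely classify a window of samples into a gesture.

    Heading wraparound (359 -> 1 degrees) is ignored for brevity; the
    30 degree and 15 m/s^2 thresholds are illustrative only.
    """
    if len(samples) < 2:
        return "none"
    sweep = max(s.heading for s in samples) - min(s.heading for s in samples)
    peak = max(abs(s.ax) + abs(s.ay) + abs(s.az) for s in samples)
    if sweep > 30.0:
        return "pan"    # direction swept through an arc over the window
    if peak > 15.0:
        return "flick"  # sharp motion with little change in heading
    return "point"      # steady heading and little motion
```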
Where applicable for a given scenario, to determine an item to which the gesture applies, the at least one processor processes the positional information and the direction information to determine a subset of points of interest relative to the portable electronic device as a function of the positional information and/or the direction information.
Devices or endpoints can include compass(es), e.g., magnetic or gyroscopic, to determine direction, and location based systems, e.g., GPS, for determining location. To supplement the positional information and/or the direction information, devices or endpoints can also include component(s) for determining displacement, speed and/or acceleration information for processing.
With the addition of directional information and gesturing in a location based ecosystem, a variety of service(s) can be provided on top of user identification or interaction with specific object(s) of interest. For instance, when a user points at a particular item at a particular location or place, this creates an opportunity for anyone having an interest in that particular item to communicate, or otherwise interact, with the user regarding that item or related items at a point in time when the user's focus is on the particular item. User context for the interaction can also be taken into account to supplement the provision of one or more interactive direction based services.
In another embodiment, a computer system displays location-based point of interest data. The computer system identifies a point of interest that is spatially-related to the computer system. This identification is based at least on a detected geographical location of the computer system, and on directional information received from one or more directional sensors. The computer system displays a first visualization of a first type that represents the geographical location, including displaying a first indication of the point of interest within the first visualization. Based on detecting a gesture, the computer system displays a second visualization of a second type that also represents the geographical location, including displaying a second indication of the point of interest within the second visualization.
These and many other non-limiting embodiments are described in more detail below.
Various non-limiting embodiments are further described with reference to the accompanying drawings in which:
Overview
As discussed in the background, among other things, current location services systems and services, e.g., GPS, cell triangulation, P2P location services, such as Bluetooth, WiFi, etc., tend to be based on the location of the device only, and tend to provide static experiences that are not tailored to a user because the data about endpoints of interest is relatively static. In addition, input to engage such static location based services is frustrating at best for portable devices, such as cell phones, PDAs, music players, notebooks, netbooks, etc. For instance, input to such devices when the user is “on the go” has been conventionally limited to input processes that are error prone, e.g., due to limited space, even when a user is not moving and the device is stationary.
At least partly in consideration of these deficiencies of conventional location based services, various embodiments of a portable device are provided that enable users to point a device directionally and receive static and/or dynamic information in response from a networked service, such as provided by one or more servers, or as part of a cloud services experience. Moreover, by determining gestures made by the device based on any one or more of direction information, motion information or location information, input for various scenarios and device contexts is greatly facilitated, and can be tailored to context based on the location, or given point(s) of interest pointed at by a pointing device.
In the various alternative embodiments described herein, leveraging digital compasses and location services to provide direction and location information enables a next generation of direction or pointer based location search services, scan services, discoverability services, etc. In this regard, the digital compass and location information, such as GPS, can be used to point at objects of interest, thus defining the entry point for one or more data transactions or interactions between the device and one or more third party devices providing service(s) for the object(s) of interest at which the device is pointed. Using a digital compass, e.g., solid state, magnetic, sun/moon based, etc., on a mobile endpoint facilitates point-and-upload scenarios, as well as pointing to synchronize geographical information to a Web service, cloud service or another endpoint.
As reflected in various embodiments, a device is provided that can home in on, interact with, or otherwise transact with, a specific object or specific objects of interest by way of location and direction of the device, creating a new advertising model not previously known. As an example, when a user interacts with a particular product on a shelf at a retail store in connection with a direction based service, this creates an opportunity for anyone having an interest in the particular product to engage the user, e.g., communicate some information to that user. Any context that can be discerned from the user's actions and interactions can also be taken into account when acting on the opportunity. In this regard, a variety of gestures can facilitate these actions and interactions without requiring the complexity of input alluded to in the background.
In this regard, with a gesture (pre-defined or user defined), users can interact with the endpoints in a host of context sensitive ways to provide or update information associated with endpoints of interest, or to receive beneficial information or instruments (e.g., coupons, offers, etc.) from entities associated with the endpoints of interest, or according to any of the many non-limiting examples described in more detail below.
In one embodiment, a portable electronic device comprises a positional component that outputs position information as a function of a location of the portable electronic device, a motion component that outputs motion information as a function of movement(s) of the portable device and a directional component that outputs direction information as a function of an orientation of the portable electronic device. The device is configured to process at least the position information to determine point(s) of interest relating to the position information and configured to process at least the motion information and the direction information to determine pre-defined gesture(s) undergone by the portable electronic device with respect to the point(s) of interest, wherein the portable electronic device automatically makes a request based on the pre-defined gesture(s) and the point(s) of interest.
The point(s) of interest can be determined from the position information and the direction information. The at least one pre-defined gesture can be determined from any one or more of the position information, the motion information and the direction information. The portable electronic device can automatically make a request based on the gesture(s) and identifier(s) associated with the point(s) of interest. The gesture(s) can be determined based on a pre-defined gesture definition or a user-defined gesture definition. A positional component can include a global positioning satellite (GPS) component for receiving and processing GPS signals or a component for receiving position information based on triangulation to wireless base stations, an image recognition system for recognizing at least one object in image data and determining a position of the device relative to the at least one object in the image data, or other means for measuring location.
The directional component can include a digital compass and can also include an image recognition system for recognizing an object in real space and determining the direction of the object and therefore the device by detecting the side of the object, or detecting the object relative to other objects fixed in real space. The motion component can include accelerometer(s) for measuring an acceleration of the device. The motion component can include at least two accelerometers for measuring a tilt or rotation of at least part of the device.
In one embodiment, a process determines a location of a portable device based on location information determined for the device, the location information representing a global position of the device. Direction information representing an orientation of the portable device and the location information are analyzed to determine point(s) of interest towards which the portable device is substantially oriented. In this regard, path information representing a path traversed by the portable device is analyzed based on at least the direction information to determine gesture(s) made by the portable device. A request is transmitted to a network service based on the gesture(s) and the point of interest.
The analyzing of path information can include processing acceleration information measuring acceleration of the device, processing velocity information measuring velocity of the device, analyzing the path information for a given time span or analyzing a set of vectors representing the path traversed by the device from a start time to a stop time. Moreover, the analyzing of path information can include analyzing three dimensional (3-D) path information representing three degrees of freedom of movement for the device, but can also include analyzing three dimensional (3-D) path information as 2-D path information by collapsing a degree of freedom.
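As a non-limiting sketch of such path analysis, the following Python fragment selects path samples for a given time span, represents the traversed path as a set of displacement vectors from a start time to a stop time, and collapses 3-D path information to 2-D by dropping one degree of freedom (here, the vertical axis); the (t, x, y, z) tuple layout is an assumption for illustration.

```python
import numpy as np

def path_in_window(path, t0, t1):
    """Select the (t, x, y, z) samples of a path falling within [t0, t1]."""
    return [p for p in path if t0 <= p[0] <= t1]

def collapse_to_2d(path):
    """Collapse 3-D samples to 2-D by dropping the vertical degree of freedom."""
    return [(t, x, y) for (t, x, y, z) in path]

def segment_vectors(path):
    """Represent a path as displacement vectors between successive samples."""
    pts = np.array([p[1:] for p in path], dtype=float)
    return pts[1:] - pts[:-1]
```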
In another embodiment, a method includes determining whether a viewing plane of a portable device is aligned with a substantially horizontal plane that is substantially parallel to a ground plane or aligned with a substantially vertical plane that is substantially orthogonal to the ground plane. If the portable device is aligned with the substantially horizontal plane, a topographical map view of a geographical area map determined based on location and direction information measured by the portable device is displayed and indication(s) of point(s) of interest on the geographical area map are displayed. If the portable device is aligned with the substantially vertical plane, an image based view of three-dimensional (3-D) space extending at least one pre-defined direction from the portable device is displayed and indication(s) of point(s) of interest pertaining to the 3-D space represented by the image based view can be displayed.
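A minimal sketch of one way to make the horizontal/vertical determination from an accelerometer-derived gravity vector follows; the 25 degree tolerance and the returned view labels are illustrative assumptions only.

```python
import math

def viewing_plane(gx, gy, gz, tolerance_deg=25.0):
    """Classify device attitude from a gravity vector (m/s^2).

    When the device lies flat (gravity along the device z axis), the viewing
    plane is roughly horizontal and a top-down map view is appropriate; when
    the device is held upright (gravity in the device x/y plane), an image
    based 3-D view is appropriate.
    """
    g = math.sqrt(gx * gx + gy * gy + gz * gz)
    if g == 0:
        return "unknown"
    tilt = math.degrees(math.acos(min(1.0, abs(gz) / g)))
    if tilt < tolerance_deg:
        return "horizontal"   # show topographical map view with POI markers
    if tilt > 90.0 - tolerance_deg:
        return "vertical"     # show camera/3-D view with POI overlay
    return "between"
```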
Details of various other exemplary, non-limiting embodiments are provided below.
Gesture Based Input to Computing Device with Direction Information
With the addition of directional information in a location based environment, a variety of mobile scanning experiences are enabled on top of user identification of, or interaction with, specific object(s) of interest by pointing, or gesturing, at an object of interest. For instance, when a user gestures, e.g., points, at a particular item at a particular location or place, this creates an opportunity for anyone having an interest in that particular item to interact with the user regarding that item or related items at a point in time when the user's focus is on the particular item. User context for the interaction can also be taken into account to supplement the provision of one or more interactive direction based services.
A gesture subsystem can optionally be included in a device, which can be predicated on any one or more of the motion information, location information or direction information. In this regard, not only can direction information and location information be used to define a set of unique gestures, but also motion information (such as speed and acceleration) can be used to define a more sophisticated set of gestures. In this regard, one can appreciate that a variety of algorithms could be adopted for a gesture subsystem. For instance, a simple click-event when in the “pointing mode” for the device can result in determining a set of points of interest for the user.
The pointing information, however produced according to an underlying set of measurement components and interpreted by a processing engine, can be one or more vectors. A vector or set of vectors can have a “width” or “arc” associated with the vector for any margin of error associated with the pointing of the device. A panning angle can be defined by a user with at least two pointing actions to encompass a set of points of interest, e.g., those that span a certain angle defined by a panning gesture by the user.
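For illustration only, the following Python sketch tests whether a point of interest falls inside the error arc around a pointing vector, using a flat-earth bearing approximation that is adequate over short ranges; the function names are hypothetical.

```python
import math

def bearing_to(lat, lon, poi_lat, poi_lon):
    """Approximate compass bearing from the device to a POI, in degrees.

    Uses an equirectangular (flat-earth) approximation, reasonable over the
    short ranges at which a user points at nearby objects.
    """
    dlat = poi_lat - lat                                    # northward
    dlon = (poi_lon - lon) * math.cos(math.radians(lat))    # eastward
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def within_arc(heading, bearing, arc_width):
    """True if a POI bearing falls inside the pointing vector's error arc."""
    diff = abs((bearing - heading + 180.0) % 360.0 - 180.0)
    return diff <= arc_width / 2.0
```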
In this respect, a gesturing component can also be included in the device to determine a current gesture of a user of the portable electronic device from a set of pre-defined gestures. For example, gestures can include zoom in, zoom out, panning to define an arc, all to help filter over potential subsets of points of interest for the user.
In addition, a device includes an algorithm for discerning items substantially along a direction at which the device is pointing, and those not substantially along a direction at which the device is pointing. In this respect, while a given motion vector might implicate one POI, without a specific panning gesture that encompassed more directions/vectors, other POIs would likely not be within the scope of points of interest defined by that motion vector. The distance or reach of a vector can also be tuned by a user, e.g., via a slider control or other control, to quickly expand or contract the scope of endpoints encompassed by a given “pointing” interaction with the device.
Other gestures that can be of interest for a gesturing subsystem include recognizing a user's gesture for zoom in or zoom out. Zoom in/zoom out can be done in terms of distance. A device pointed in a given direction may present a zoomed in view that includes points of interest within a short distance and arc, a medium zoomed view representing points of interest at an intermediate distance, or a zoomed out view representing points of interest beyond that distance. Each of these zoom zones corresponds to a different subset of POIs. More or fewer zones can be considered depending upon a variety of factors, the service, user preference, etc.
For another non-limiting example, with location information and direction information, a user can input a first direction via a click, and then a second direction after moving the device via a second click, which in effect defines an arc for objects of interest. For instance, via a first pointing act by the user at a first time in a first direction and a second pointing act at a second time in a second direction, an arc is implicitly defined. The area of interest implicitly includes a search of points of interest within a distance, which can be zoomed in and out, or selected by the service based on a known granularity of interest, selected by the user, etc. This can be accomplished with a variety of forms of input to define the two directions. For instance, the first direction can be defined upon a click-and-hold button event, or other engage-and-hold user interface element, and the second direction can be defined upon release of the button. Similarly, two consecutive clicks corresponding to the two different directions can also be implemented.
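A non-limiting sketch of this two-click arc test follows, assuming the arc is swept clockwise from the first pointing direction to the second; combined with a zoomable distance bound, it yields the implicitly defined area of interest.

```python
def in_panned_arc(heading_start, heading_end, bearing):
    """True if a POI bearing (degrees) lies within the arc swept clockwise
    from the first pointing act to the second."""
    span = (heading_end - heading_start) % 360.0
    offset = (bearing - heading_start) % 360.0
    return offset <= span

def pois_in_arc(pois, heading_start, heading_end, max_dist):
    """Filter (bearing_deg, distance_m, name) tuples by the panned arc and
    a distance bound that the user or service can zoom in or out."""
    return [p for p in pois
            if in_panned_arc(heading_start, heading_end, p[0])
            and p[1] <= max_dist]
```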
Also, instead of focusing on real distance, zooming in or out could also represent a change in terms of granularity, or size, or hierarchy of objects. For example, a first pointing gesture with the device may result in a shopping mall appearing, but with another gesture, a user could carry out a recognizable gesture to gain or lose a level of hierarchical granularity with the points of interest on display. For instance, after such gesture, the points of interest could be zoomed in to the level of the stores at the shopping mall and what they are currently offering.
In addition, a variety of even richer behaviors and gestures can be recognized when acceleration of the device in various axes can be discerned. Panning, arm extension/retraction, swirling of the device, backhand tennis swings, breaststroke arm action, and golf swing motions could all signify something unique in terms of the behavior of the pointing device, to name just a few motions that could be implemented in practice. Thus, any of the embodiments herein can define a set of gestures that serve to help the user interact with a set of services built on the pointing platform, to help users easily gain information about points of interest in their environment.
Furthermore, with relatively accurate upward and downward tilt of the device, in addition to directional information such as calibrated and compensated heading/directional information, other services can be enabled. Typically, if a device is at ground level, the user is outside, and the device is “pointed” up towards the top of buildings, the granularity of information about points of interest sought by the user (building level) is different than if the user were pointing at the first floor shops of the building (shops level), even where the same compass direction is implicated. Similarly, where a user is at the top of a landmark such as the Empire State building, a downward tilt at the street level (street level granularity) would implicate information about different points of interest than if the user of the device pointed with relatively no tilt at the Statue of Liberty (landmark/building level of granularity).
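The following non-limiting Python sketch maps tilt (and, where available, device elevation) to a POI granularity level along the lines just described; all thresholds are illustrative assumptions rather than calibrated values.

```python
def granularity_for_tilt(tilt_deg, device_elevation_m=0.0):
    """Map pitch (positive = pointing up) to a POI granularity level."""
    if tilt_deg > 20.0:
        return "building"   # pointing up toward the tops of buildings
    if tilt_deg < -20.0 and device_elevation_m > 50.0:
        return "street"     # pointing down at street level from a landmark
    return "shop"           # roughly level: ground-floor shops
```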
A device can also include a Hardware Abstraction Layer (HAL) having components responsible for abstracting the way the client communicates with the measuring instruments, e.g., the GPS driver for positioning and LOS accuracy (e.g., open eGPS), the magnetic compass for heading and rotational information (e.g., gyroscopic), and one or more accelerometers for gesture input and tilt (achieving 3-D positional algorithms, assuming a gyroscopic compass).
In cooperation with gesture based analysis component 102, and optionally local applications or services 160 (or remote services 135), processor(s) 110 process the position information and/or the direction information to determine a set of points of interest relating to the position/direction information. Processor(s) 110 also process the motion information, direction information and/or position information to determine pre-defined gesture(s) undergone by the portable electronic device with respect to one or more points of interest of the set. In response to the pre-defined gesture(s), the portable electronic device automatically makes a request based on the pre-defined gesture(s) and identifier(s) associated with the one or more points of interest of the set.
The gesture based analysis component 102 can determine a set of current gesture(s) 104 based on one or more of the position information, such as but not limited to GPS information, output from position engine or subsystem 120, the motion information, such as but not limited to accelerometer information, of motion engine or subsystem 130, or the direction information, such as digital compass information, output from direction engine or subsystem 140. Gesture based analysis component 102 determines gesture(s) 104 relative to gesture definitions 106, which can be statically defined on the device, defined by the user of the device, retrieved from a gesture definition network provider (not shown), etc. Gesture history 108, coupled with other place and point of interest information, can be a rich source for intelligent applications 160 or network services 135 to understand context for a given device gesture based on historical interaction.
Device 100 can include storage 150 for storing any of position information, motion information, direction information, gesture definitions 106, gesture history 108, application information, etc. The device 100 can also include a graphics subsystem display and associated user interface 180 for display of information and/or for receiving touchscreen input. An audio subsystem 170 can also be included for voice or other sound input, or sound output in connection with the provision of gesture and pointing based services.
For instance, via network interface 190, based on a current gesture 104, an automatic request 115 can be made to network/data services 135 based on the gesture and place or point of interest identification. As a result, a variety of actions 125 can take place, e.g., targeted content, advertising, offers, deals, price comparisons, etc. Local applications 160 and storage 150 are optional as any of the functionality of providing gesture based services can be pushed to the network data services 135, or conversely, functionality of data services 135 can be implemented by a local application 160.
At 400, gesture(s) undergone by a mobile device are determined based on direction/motion information, location information, etc., i.e., the path of the device is discerned in 2-D or 3-D. At 410, a search key representing intent of interaction with the pointer device at a current location is generated. Direction information can be a basis for determining intent. At 420, the current location and search key are transmitted to the service(s). Based on the current location and given search key, content from network service providers may be received. At 430, the content from the network service providers is displayed, e.g., from online retailers, a provider at the current location, or third party entities having an interest in the current location/search key pair. At 440, the network service providers can be monetized by a billing system for the opportunity to provide the content at a relevant moment for the device user.
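Purely as an illustrative sketch of acts 410-420, the following standard-library Python code posts the current location and a gesture-derived search key to a service and returns whatever content the providers respond with; the URL and payload shape are hypothetical, not an actual service contract.

```python
import json
import urllib.request

def send_pointing_request(service_url, lat, lon, search_key):
    """POST the current location and a gesture-derived search key to a
    pointing service; the endpoint and JSON schema are assumptions."""
    payload = json.dumps({
        "lat": lat, "lon": lon, "search_key": search_key
    }).encode("utf-8")
    req = urllib.request.Request(
        service_url, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # content to display at act 430
```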
The collective information 650 can be used to gain a sense of not only where the device 600 is located in relation to other potential points of interest tracked or known by the overall set of services 660, but also to understand what direction the user is pointing the device 600, so that the services 660 can appreciate at whom or what the user is pointing the device 600, and to further gain a sense of how the user wishes to interact with the place or point of interest via the gesture information 672.
Gesture subsystem 670 can be predicated on any one or more of the motion information 612, location information 622 or direction information 632. In this regard, not only can direction information 632 and location information 622 be used to define a set of unique gestures, but also motion information 612 (such as speed and acceleration) can be used to define a more sophisticated set of gestures.
The following are some non-limiting examples of shopping experiences and interactions that can be invoked by way of explicit input or a pre-defined gesture via reviews user interface 830: information 832 about an item of interest, e.g., a currently selected item, request for user reviews 834, request for expert reviews 836, or request for graphical representations of product performance and/or audio rendering 838.
To further illustrate, the following are some non-limiting examples of shopping experiences and interactions that can be invoked by way of explicit input or a pre-defined gesture via request for price compare user interface 850: request for lowest price ranking 852, request for quantity/size ranking 854, request for best feedback ranking 856, or request for deal information 858, e.g., insertion of a coupon for 30% off an in-store item to encourage the shopper to buy in store.
A non-limiting implementation of scope definition 1040 is shown in scope of points of interest UI 1050, in which the user selects a degree of scope ranging from near 1052 to far 1054. A similar exercise can be applied to set an elevation scope to capture towering POIs, such as buildings, mountains, lighthouses, etc. In this example, POI 1022 is far whereas POI 1024 is near, and so depending on how the gesture or input is made, one or the other POI may be selected based on the scope of POIs. In one embodiment, a tilt gesture achieves the desired effect: by tilting the viewfinder of a camera up, for example, the scope is extended outwards, whereas by tilting toward the ground, the scope retracts inwards. A user can pre-define unique gestures via a gesture definition application that helps the user customize the gestures of the phone that trigger actions.
Once a person or persons are selected, a user can gesture with the device in a variety of pre-defined or user defined ways per the following non-limiting examples: initiate look through viewfinder for context gesture 1141, discover friends or other desired subscribers gesture 1142, call gesture 1143, start message gesture 1144, view person's website gesture 1145, e.g., Facebook, MySpace, Twitter, add person to list gesture 1146, e.g., Friends, Family, Favorites, find out schedule availability gesture 1147, or friend or flirt gesture 1148, e.g., make a heart shape in 2-D with the device with respect to a person.
Once a place and associated events are selected, a user can gesture with the device in a variety of pre-defined or user defined ways per the following non-limiting examples: request for information about events at location/venue gesture 1241, show “Show Times/Dates” gesture 1242, purchase tickets gesture 1243, find out hours gesture 1244, watch promotional content 1245, e.g., Trailers, Video, Photos, etc., feedback to owner gesture 1246, e.g., digg thumbs up, or down, or a request for website gesture 1247.
Once a property or properties are selected at 1330, a user can gesture with the device in a variety of pre-defined or user defined ways per the following non-limiting examples: agent contact info gesture 1341, e.g., with automatic links for immediate contact, multiple listing service (MLS) listing for house gesture 1342, request open house info gesture 1343, find out price gesture 1344, watch promotional content 1345, e.g., trailers, video, photos, etc., give feedback to owner gesture 1346, e.g., digg thumbs up, or down, find out vendor info gesture 1347, e.g., find out who does the landscaping, or a request for historical/projected data gesture 1348.
For instance, a view manual user interface 1440 can be invoked by performing a view help gesture 1430. View manual user interface 1440 may include product (POI) help index 1422, and correspondingly include the ability to search 144, link to supplemental help 1446, or link to repair or service for the item, as well as display a given section of the help information in section 1450.
In the present non-limiting implementation, a user can add an item by geoplanting it and associating it with a list 1630, and invoke geoplant and list user interface 1640, with list information 1642, distance from geoplant 1644, geoplant information 1646, e.g., store name and information, directions 1648, or hours 1650, e.g., status “open for 2 more hours” or “closed 10 minutes ago.”
With respect to the list, an operation might be to see the list UI 1730, thereby bringing up grocery list user interface 1740 including a list of items 1742, e.g., where one can check off items as acquired, ranking/filter UI 1744, add/manage/sort items 1746, geoplant list 1748, share list 1750, or purchase item(s) online 1752. Another operation with respect to groceries might be to see favorites 1732 via favorite groceries UI 1760, which may include a list of favorite items 1762, e.g., where one can check off items as acquired, ranking/filter UI 1764, add/manage/sort items 1766, put on “Hunt for” list 1768 (described in more detail below), share list 1770, or view online content relating to item(s) 1772, e.g., ads.
To learn more about an item on the hunt list 1820, a user can select an item 1830 and invoke item-level hunt user interface 1840, which can include item information 1842, nearest store information 1844, hours information 184, related deals 1848, directions to store(s) 1850, or enable purchase of item(s) online 1852.
At 2420, information about a selected point of interest is displayed as an overlay over the image. At 2430, an action is requested with respect to the selected place or item, e.g., show information, directions, etc. For example, a user may wish to review the item or add to Wikipedia knowledge about the point of interest, e.g., upload information, images, etc. In this regard, because it is intuitive to give a 3-D perspective view when the viewing plane is orthogonal to the ground plane, in the present embodiment, a 3-D perspective view with POI information overlay is implemented when the device is held substantially in the vertical plane. In effect, the camera shows the real space behind the device, and indications of points of interest in that space, as if the user were performing a scan of his or her surroundings with the device. Direction information of the device 2600 enables data and network services to know what the scope of objects for interaction with the device is.
At 2710, path information representing a path traversed by the portable device is determined based on direction information or motion information defining the path information, to determine at least one gesture made by the portable device. Any of tilt, spin, rotation along any axis, translation, etc. can be taken into account for the path information when determining if a gesture has been made. In one non-limiting embodiment, the path information is represented as a set of vectors varying over time. A user can indicate that a gesture is about to be made, and when it will stop, by explicitly or implicitly defining a start of a gesture or an end of a gesture. This input for starting and stopping a gesture can itself be a gesture for starting or a gesture for stopping. In the same way, a user can define a gesture by performing start and stop operations for defining the gesture, and the path information between start and stop represents the user-defined gesture.
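A non-limiting sketch of matching a recorded start-to-stop path against stored gesture definitions follows, in the spirit of simple template matchers: the path is resampled, normalized for position and scale, and compared point-wise. The 32-point resolution and the 0.25 threshold are illustrative assumptions.

```python
import numpy as np

def resample(path, n=32):
    """Resample a polyline (sequence of 2-D points) to n evenly spaced points."""
    pts = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, dist[-1], n)
    return np.column_stack([
        np.interp(targets, dist, pts[:, 0]),
        np.interp(targets, dist, pts[:, 1])])

def normalize(path):
    """Translate to the centroid and scale to unit size, so a gesture matches
    regardless of where, and how large, it was performed."""
    p = resample(path)
    p -= p.mean(axis=0)
    scale = np.abs(p).max() or 1.0
    return p / scale

def match_gesture(path, definitions, threshold=0.25):
    """Return the name of the closest stored definition (pre-defined or
    user-defined), or None if nothing is close enough."""
    probe = normalize(path)
    best, best_d = None, float("inf")
    for name, proto in definitions.items():
        d = np.mean(np.linalg.norm(probe - normalize(proto), axis=1))
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= threshold else None
```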
At 2720, direction information, representing an orientation of the portable device, and the location information are processed to determine a point of interest to which the at least one gesture applies. At 2730, a request is transmitted, e.g., automatically, to a network service based on the at least one gesture and the point of interest.
Next, to navigate to the POI or nearest POI in a given category 3160, the image input 3112 includes direction information 3130 that tells the user where to point the device 3100. In this case, the directions indicate to swivel the angle of the device or translate the device to the right to see more imagery 3112 to the right. Since the user still cannot see the target POI in this example, the user again swivels the device to the right resulting in image input 3114 that includes the target POI 3140, and the direction of the user's outstretched arm points exactly to where the user wants to go. An intelligent algorithm can also be applied to directions taking into account that humans cannot yet walk through walls, fences, etc., or that cars by law must stay on streets, etc. In this way, a POI can be located and highlighted 3170, from which further operations can be defined. For instance, the POI can be selected, or otherwise acted upon consistent with any of the other embodiments described herein.
A representative interaction with a pointing device as provided in one or more embodiments herein is illustrated in the accompanying drawings.
At 3330, some action enables an explicit and/or implicit selection of an item of interest within scope. Then, any of a great variety of services can be performed with respect to any point of interest selected by the user via a user interface. In one aspect, at 3340, interested parties can advertise based on the selection of the items of interest and parties can be compensated according to agreed upon advertising models.
For instance, in an optional Quick Response (QR) support embodiment, decoding allows users to take pictures of a QR code and process its contents where information has been encoded into a sticker/printout for display outside of a business (e.g., in the form of a copyrighted URL). The code need not be a QR code, but could be any code that can be read or scanned or processed to determine its underlying content. For instance, with a visual representation, a picture can be taken and processed, or with a bar code, the device can scan it. RF identification technology could also be used. For the avoidance of doubt, any encoded image format can be used, such as a bar code, of which a QR code is only one example.
In effect, this enables a query for POI information via a QR code or other encoding. The user scans or images the code with a device 3630, and then transmits the code to the service, which translates the code into static and dynamically updated user information for display as a UI 3600 (or other user interface representation) so that the user can query about a POI merely by pointing at it. A URL for the POI can also be encoded in a format such as a QR code. In one non-limiting embodiment, the user can point the device at a QR code, and decode a given image with the QR code.
Supplemental Context Regarding Pointing Devices, Architectures and Services
The following description contains supplemental context regarding potential non-limiting pointing devices, architectures and associated services to further aid in understanding one or more of the above embodiments. Any one or more of any additional features described in this section can be accommodated in any one or more of the embodiments described above with respect to direction based services at a particular location. While such combinations of embodiments or features are possible, for the avoidance of doubt, no embodiments set forth in the subject disclosure should be considered limiting on any other embodiments described herein.
As mentioned, a broad range of scenarios can be enabled by a device that can take location and direction information about the device and build a service on top of that information. For example, by effectively using an accelerometer in coordination with an on board digital compass, an application running on a mobile device updates what each endpoint is “looking at” or pointed towards, attempting hit detection on potential points of interest to either produce real-time information for the device, or to allow the user to select a range or, using the GPS, a location on a map, and set information such as “Starbucks—10% off cappuccinos today” or “The Alamo—site of . . . ” for others to discover. One or more accelerometers can also be used to perform the function of determining direction information for each endpoint as well. As described herein, these techniques can become more granular to particular items within a Starbucks, such as “blueberry cheesecake” on display in the counter, enabling a new type of sale opportunity.
Accordingly, a general device for accomplishing this includes a processing engine to resolve a line of sight vector sent from a mobile endpoint and a system to aggregate that data as a platform, enabling a host of new scenarios predicated on the pointing information known for the device. The act of pointing with a device, such as the user's mobile phone, thus becomes a powerful vehicle for users to discover and interact with points of interest around the individual in a way that is tailored for the individual. Synchronization of data can also be performed to facilitate roaming and sharing of POV data and contacts among different users of the same service.
In a variety of embodiments described herein, 2-dimensional (2D), 3-dimensional (3D) or N-dimensional directional-based search, discovery, and interactivity services are enabled for endpoints in the system of potential interest to the user.
In this regard, the pointing information and corresponding algorithms ultimately depend upon the assets available in a device for producing the pointing or directional information. As noted above, the pointing information, however produced, can be one or more vectors, each with an associated “width” or “arc” for any margin of error, and a panning angle can be defined by a user with at least two pointing actions to encompass a set of points of interest.
In this respect, a device can include a variety of spatial and map components and intelligence to determine intersections for directional arcs. For instance, objects of interest could be represented with exact boundaries, approximated with spheres, subshells (stores in a mall) of a greater shell (mall), hierarchically arranged, etc. Dynamically generated bounding boxes can also be implemented, i.e., any technique can be used to obtain boundary information for use in an intersection algorithm. Thus, such boundaries can be implicitly or explicitly defined for the POIs.
Thus, a device can include an intersection component that interprets pointing information relative to a set of potential points of interest. The engine can perform such intersections knowing what the resolution of the measuring instruments is on the device, such as a given resolution of a GPS system. Such techniques can include taking into account how far a user is from a plane of objects of interest, such as items on a shelf or wall, the size of the item of interest and how that is defined, as well as the resolution of location instrumentation, such as the GPS system. The device can also optionally include an altimeter, or any other device that gives altitude information, such as measuring radar or sonar bounce from the floor. The altitude information can supplement existing location information for certain specialized services where points of interest vary significantly at different altitudes. It is noted that GPS itself has some information about altitude in its encoding.
In one non-limiting embodiment, a portable electronic device includes a positional component for receiving positional information as a function of a location of the portable electronic device, a directional component that outputs direction information as a function of an orientation of the portable electronic device and a location based engine that processes the positional information and the direction information to determine a subset of points of interest relative to the portable electronic device as a function of at least the positional information and the direction information.
The positional component can be a positional GPS component for receiving GPS data as the positional information. The directional component can be a magnetic compass and/or a gyroscopic compass that outputs the direction information. The device can include acceleration component(s), such as accelerometer(s), that output acceleration information associated with movement of the portable electronic device. A separate sensor can also be used to further compensate for tilt and altitude adjustment calculations.
In one embodiment, the device includes a cache memory for dynamically storing a subset of endpoints of interest that are relevant to the portable electronic device and at least one interface to a network service for transmitting the positional information and the direction information to the network service. In return, based on real-time changes to the positional information and direction/pointing information, the device dynamically receives in the cache memory an updated subset of endpoints that are potentially relevant to the portable electronic device.
For instance, the subset of endpoints can be updated as a function of endpoints of interest within a pre-defined distance substantially along a vector defined by the orientation of the portable electronic device. Alternatively or in addition, the subset of endpoints can be updated as a function of endpoints of interest relevant to a current context of the portable electronic device. In this regard, the device can include a set of Representational State Transfer (REST)-based application programming interfaces (APIs), or other stateless set of APIs, so that the device can communicate with the service over different networks, e.g., Wi-Fi, a GPRS network, etc., or communicate with other users of the service, e.g., via Bluetooth. For the avoidance of doubt, the embodiments are in no way limited to a REST based implementation; rather, any other stateless or stateful protocol could be used to obtain information from the service to the devices.
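By way of a non-limiting sketch, a client-side cache of this kind might gate refreshes on how far the device has moved, pulling an updated endpoint subset from a stateless (e.g., REST-style) service; the endpoint URL, query parameters and 50 meter threshold are assumptions for illustration.

```python
import json
import math
import time
import urllib.parse
import urllib.request

class PoiCache:
    """Client-side cache of nearby endpoints, refreshed from a hypothetical
    stateless REST endpoint whenever position changes enough to matter."""

    def __init__(self, service_url, min_move_m=50.0):
        self.service_url = service_url
        self.min_move_m = min_move_m
        self.last_fix = None
        self.entries = []

    def maybe_refresh(self, lat, lon, heading):
        if self.last_fix and self._moved(lat, lon) < self.min_move_m:
            return self.entries              # cache is still fresh enough
        query = urllib.parse.urlencode(
            {"lat": lat, "lon": lon, "heading": heading})
        with urllib.request.urlopen(f"{self.service_url}?{query}") as resp:
            self.entries = json.load(resp)
        self.last_fix = (lat, lon, time.time())
        return self.entries

    def _moved(self, lat, lon):
        # crude equirectangular distance in meters; adequate for cache gating
        lat0, lon0, _ = self.last_fix
        dx = math.radians(lon - lon0) * 6371000 * math.cos(math.radians(lat0))
        dy = math.radians(lat - lat0) * 6371000
        return math.hypot(dx, dy)
```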
The directional component outputs direction information including compass information based on calibrated and compensated heading/directionality information. The directional component can also include direction information indicating upward or downward tilt information associated with a current upward or downward tilt of the portable electronic device, so that the services can detect when a user is pointing upwards or downwards with the device in addition to a certain direction. The height of the vector itself can also be taken into account to distinguish between an event of pointing with a device from the top of a building (likely pointing to other buildings, bridges, landmarks, etc.) and the same event from the bottom of the building (likely pointing to a shop at ground level), or towards a ceiling or floor to differentiate among shelves in a supermarket. A 3-axis magnetic field sensor can also be used to implement a compass to obtain tilt readings.
Secondary sensors, such as altimeters or pressure readers, can also be included in a mobile device and used to detect a height of the device, e.g., what floor a device is on in a parking lot or floor of a department store (changing the associated map/floorplan data). Where a device includes a compass with a planar view of the world (e.g., 2-axis compass), the inclusion of one or more accelerometers in the device can be used to supplement the motion vector measured for a device as a virtual third component of the motion vector, e.g., to provide measurements regarding a third degree of freedom. This option may be deployed where the provision of a 3-axis compass is too expensive, or otherwise unavailable.
For instance, web services can effectively resolve vector coordinates sent from mobile endpoints into <x,y,z> or other coordinates using location data, such as GPS data, as well as configurable, synchronized POV information similar to that found in a GPS system in an automobile. In this regard, any of the embodiments can be applied similarly in any motor vehicle device. One non-limiting use is also facilitation of endpoint discovery for synchronization of data of interest to or from the user from or to the endpoint.
Among other algorithms for interpreting position/motion/direction information, the following non-limiting examples are illustrative.
In addition, a device 3900 includes an algorithm for discerning items substantially along a direction at which the device is pointing, and those not substantially along a direction at which the device is pointing. In this respect, while motion vector 3904 might implicate POI 3912, without a specific panning gesture that encompassed more directions/vectors, POIs 3914 and 3916 would likely not be within the scope of points of interest defined by motion vector 3904. The distance or reach of a vector can also be tuned by a user, e.g., via a slider control or other control, to quickly expand or contract the scope of endpoints encompassed by a given “pointing” interaction with the device.
In one non-limiting embodiment, the determination of at what or whom the user is pointing is performed by calculating an absolute “Look” vector, within a suitable margin of error, from a reading of an accelerometer's tilt and a reading from the magnetic compass. Then, an intersection of endpoints determines an initial scope, which can be further refined depending on the particular service employed, i.e., any additional filter. For instance, for an apartment search service, endpoints falling within the look vector that are not apartments ready for lease can be pre-filtered.
In addition to the look vector determination, the engine can also compensate for, or begin the look vector where the user is, by establishing positioning (~15 feet) through an A-GPS stack (or other location based or GPS subsystem including those with assistance strategies), and also compensate for any significant movement/acceleration of the device, where such information is available.
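For illustration, a look vector can be computed from the compass heading and accelerometer-derived pitch as below, with the pointing arc widened to absorb the stated positioning error; the east/north/up frame convention and the widening formula are a non-limiting sketch.

```python
import math

def look_vector(heading_deg, pitch_deg):
    """Unit 'Look' vector in an east/north/up frame, built from a compass
    heading (degrees clockwise from north) and a pitch (positive = up)."""
    h, p = math.radians(heading_deg), math.radians(pitch_deg)
    return (math.sin(h) * math.cos(p),   # east
            math.cos(h) * math.cos(p),   # north
            math.sin(p))                 # up

def effective_arc(base_arc_deg, position_error_m, poi_distance_m):
    """Widen the pointing arc to absorb position uncertainty: for the same
    positioning error, nearby POIs need a wider arc than distant ones."""
    extra = math.degrees(
        math.atan2(position_error_m, max(poi_distance_m, 1.0)))
    return base_arc_deg + 2.0 * extra
```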
As mentioned, in another aspect, a device can include a client side cache of potentially relevant points of interest, which, based on the user's movement history can be dynamically updated. The context, such as geography, speed, etc. of the user can be factored in when updating. For instance, if a user's velocity is 2 miles an hour, the user may be walking and interested in updates at a city block by city block level, or at a lower level granularity if they are walking in the countryside. Similarly, if a user is moving on a highway at 60 miles per hour, the block-by-block updates of information are no longer desirable, but rather a granularity can be provided and predictively cached on the device that makes sense for the speed of the vehicle.
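A minimal sketch of choosing prefetch granularity from device speed follows; the speed bands, radii and detail labels are illustrative assumptions only.

```python
def cache_granularity(speed_mps):
    """Choose a predictive-cache granularity from device speed (m/s)."""
    if speed_mps < 2.5:          # walking pace
        return {"radius_m": 200, "detail": "block"}
    if speed_mps < 15.0:         # city driving
        return {"radius_m": 2000, "detail": "neighborhood"}
    return {"radius_m": 20000, "detail": "exit/landmark"}   # highway speed
```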
In an automobile context, the location becomes the road on which the automobile is travelling, and the particular items are the places and things that are passed on the roadside, much like products in a particular retail store on a shelf or in a display. The pointing based services thus create a virtual “billboard” opportunity for items of interest generally along a user's automobile path. Proximity to location can lead to an impulse buy, e.g., a user might stop by a museum they are passing and pointing at with their device, if offered a discount on admission.
In various alternative embodiments, gyroscopic or magnetic compasses can provide directional information. A REST based architecture enables data communications to occur over different networks, such as Wi-Fi and GPRS architectures. REST based APIs can be used, though any stateless messaging can be used that does not require a long keep-alive for communicated data/messages. This way, since a GPRS antenna's network can go down, seamless switching can occur to Wi-Fi or Bluetooth networks so that the pointing based services enabled by the embodiments described herein can continue.
A device as provided herein according to one or more embodiments can include a file system to interact with a local cache, store updates for synchronization to the service, exchange information by Bluetooth with other users of the service, etc. Accordingly, operating from a local cache, at least the data in the local cache is still relevant at a time of disconnection, and thus, the user can still interact with the data. Finally, the device can synchronize according to any updates made at a time of re-connection to a network, or to another device that has more up to date GPS data, POI data, etc. In this regard, a switching architecture can be adopted for the device to perform a quick transition from connectivity from one networked system (e.g., cell phone towers) to another computer network (e.g., Wi-Fi) to a local network (e.g., mesh network of Bluetooth connected devices).
With respect to user input, a set of soft keys, touch keys, etc. can be provided to facilitate in the directional-based pointing services provided herein. A device can include a windowing stack in order to overlay different windows, or provide different windows of information regarding a point of interest (e.g., hours and phone number window versus interactive customer feedback window). Audio can be rendered or handled as input by the device. For instance, voice input can be handled by the service to explicitly point without the need for a physical movement of the device. For instance, a user could say into a device “what is this product right in front of me? No, not that one, the one above it” and have the device transmit current direction/movement information to a service, which in turn intelligently, or iteratively, determines what particular item of interest the user is pointing at, and returns a host of relevant information about the item.
One non-limiting way for determining a set of points of interest is illustrated in the accompanying drawings.
Other gestures that can be of interest for a gesturing subsystem include recognizing a user's gesture for zoom in or zoom out. As noted above, zoom in/zoom out can be done in terms of distance.
For another non-limiting example, with location information and direction information, a user can input a first direction via a click, and then a second direction after moving the device via a second click, which in effect defines an arc 4210 for objects of interest in the system, such as objects 4230, 4232, 4234, as illustrated in the accompanying drawings.
Also, when a device is moving in a car, it may appear that direction is changing as the user maintains a pointing action on a single location, but the user is still pointing at the same thing due to displacement. Thus, time varying location can be factored into the mathematics and engine for resolving at what the user is pointing with the device, to compensate for the motion of the user, relative to which all items are observed.
Accordingly, armed with the device's position, one or more web or cloud services can analyze the vector information to determine at what or whom the user is looking/pointing. The service can then provide additional information such as ads, specials, updates, menus, happy hour choices, etc., depending on the endpoint selected, the context of the service, the location (urban or rural), the time (night or day), etc. As a result, instead of a blank contextless Internet search, a form of real-time visual search for users in real 3-D environments is provided.
In one non-limiting embodiment, the direction based pointing services are implemented in connection with a pair of glasses, headband, etc. having a corresponding display means that acts in concert with the user's looking to highlight or overlay features of interest around the user.
For instance, a set of different choices can be presented to the user via the device's user interface.
When things change from the perspective of either the service or the client, a synchronization process can bring the client or the service, respectively, up to date. In this way, an ecosystem is enabled where a user can point at an object or point of interest, gain information about it that is likely to be relevant to the user, interact with the information concerning the point of interest, and add value to the services ecosystem with which the user interacts. The system thus advantageously supports both static and dynamic content.
Other user interfaces can be considered, such as left-right or up-down arrangements for navigating categories, or a special set of soft-keys can be adaptively provided.
Where a device includes a camera, in one embodiment illustrated in the figures, the pointing based services can also be employed in connection with the camera.
With respect to a representative set of user settings, a number or maximum number of desired endpoints delivered as results can be configured. How to filter can also be configured, e.g., the 5 most likely, the 5 closest, the 5 closest up to 100 feet away, the 5 within a category or subcategory, alphabetical order, etc. In each case, based on a pointing direction, a cone or other cross section across physical space is implicitly defined as a scope of possible points of interest. In this regard, the width or depth of this cone or cross section can be configured by the user to control the accuracy of the pointing, e.g., a narrow or wide radius of points and how far out to search.
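By way of a non-limiting sketch (the parameter names are illustrative), the implicitly defined cone and the user-configurable result limiting described above could be expressed as follows:

```python
import math

def filter_pois(heading_deg, pois, half_angle_deg=15, max_range_m=500,
                max_results=5):
    """pois: iterable of (name, x, y) offsets in meters east/north of the device.
    half_angle_deg controls a narrow vs. wide cone; max_range_m, how far out."""
    hits = []
    for name, x, y in pois:
        rng = math.hypot(x, y)
        bearing = math.degrees(math.atan2(x, y)) % 360
        off_axis = abs((bearing - heading_deg + 180) % 360 - 180)
        if rng <= max_range_m and off_axis <= half_angle_deg:
            hits.append((rng, name))
    return [name for _, name in sorted(hits)[:max_results]]  # e.g., 5 closest
```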
To support processing of vector information and aggregating POI databases from third parties, a variety of storage techniques, such as relational storage techniques, can be used. For instance, Virtual Earth data can be used for mapping, and aggregation of POI data can occur from third parties such as Tele Atlas, NavTeq, etc. In this regard, businesses not yet in the POI database will want to be discovered, and thus the service provides a Yellow Pages-like experience that is far superior from a spatial relevance standpoint, where businesses will desire to have their additional information, such as menus, price sheets, coupons, pictures, virtual tours, etc., accessible via the system.
In addition, a synchronization platform or framework can keep the roaming caches in sync, thereby capturing what users are looking at and efficiently processing changes. Or, where a user goes offline, local changes can be recorded, and when the user goes back online, such local changes can be synchronized to the network or service store. Also, since users are in effect pulling information they care about in the here and now through the act of pointing with the device, the system generates high cost per thousand impressions (CPM) rates as compared to other forms of demographic targeting. Moreover, the system drives impulse buys: the user may not be physically present in a store, but by being nearby and pointing at the store, information about a sale concerning an object there can be sent to the user.
As mentioned, different location subsystems, such as tower triangulation, GPS, A-GPS, E-GPS, etc., have different tolerances. For instance, with GPS, tolerances can be achieved to about 10 meters. With A-GPS, tolerances can be tightened to about 12 feet. In turn, with E-GPS, tolerance may be a different error margin still. Compensating for the different tolerances is part of the interpretation engine for determining the intersection of a pointing vector and a set of points of interest.
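As a minimal sketch of such compensation (the GPS and A-GPS figures are the representative tolerances mentioned above; the default is illustrative), the pointing cone can be widened by the angle subtended by the position error at the search range:

```python
import math

LOCATION_TOLERANCE_M = {"gps": 10.0, "a-gps": 3.7}   # ~10 meters; ~12 feet

def effective_half_angle(base_half_angle_deg, range_m, source):
    err = LOCATION_TOLERANCE_M.get(source, 25.0)     # illustrative default margin
    # An error circle of radius err at distance range_m subtends this extra angle:
    widen = math.degrees(math.atan2(err, max(range_m, 1.0)))
    return base_half_angle_deg + widen
```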
In this regard, the various embodiments described herein can employ any algorithm for distinguishing among boundaries of the endpoints, such as bounding boxes, rectangles, triangles, circles, etc. As a default radius, e.g., 150 feet could be selected, and such value can be configured or be context sensitive to the service provided. On-line real estate sites can be leveraged for existing POI information. Since different POI databases may track different information at different granularities, a way of normalizing the POI data according to one convention or standard can also be implemented, so that, for example, residential real estate location data from Zillow can be integrated with GPS location information for all of the Starbucks locations by country.
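A minimal, non-limiting sketch of boundary testing with a configurable default radius (the field names are hypothetical) follows:

```python
DEFAULT_RADIUS_M = 150 * 0.3048          # the 150-foot default noted above

def hits_boundary(px, py, poi):
    """poi: dict with 'x', 'y' and an optional 'shape'; falls back to a circle."""
    shape = poi.get("shape", {"kind": "circle", "r": DEFAULT_RADIUS_M})
    dx, dy = px - poi["x"], py - poi["y"]
    if shape["kind"] == "circle":
        return dx * dx + dy * dy <= shape["r"] ** 2
    if shape["kind"] == "box":           # axis-aligned bounding box
        return abs(dx) <= shape["w"] / 2 and abs(dy) <= shape["h"] / 2
    return False
```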
In addition, similar techniques can be implemented in a moving vehicle client that includes GPS, compass, accelerometer, etc. By filtering based on scenarios (e.g., “I need gas”), different subsets of points of interest (e.g., gas stations) can be determined for the user based not only on distance, but also on the actual time it may take to reach the point of interest. In this regard, while a gas station may be 100 yards to the right off the highway, the car may have already passed the corresponding exit; more useful information to provide is which gas station will take the least amount of time to reach from the current location based on direction and location. The aim is to provide predictive points of interest that are up ahead on the road, not already aged points of interest that would require turning around from one's destination in order to reach them.
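As a non-limiting sketch, assuming a hypothetical routing helper that accounts for exits, U-turn availability, and road topology, matching candidates could be ranked by estimated driving time rather than straight-line distance:

```python
def best_poi_by_time(position, heading, candidates, route_time):
    """route_time(position, heading, poi) -> estimated driving seconds;
    assumed to penalize POIs behind an already passed exit."""
    if not candidates:
        return None
    return min(candidates, key=lambda poi: route_time(position, heading, poi))
```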
For existing motor vehicle navigation devices, or other conventional portable GPS navigation devices, where a device does not natively include directional means such as a compass, the device can have an extension slot that accommodates direction information from an external directional device, such as a compass. Similarly, laptops or other portable electronic devices can be outfitted with a card or board with a slot for a compass. While any of the services described herein can make web service calls as part of the pointing and endpoint retrieval process, as mentioned, one advantageous feature of a user's locality in real space is that it is inherently more limited than a general Internet search for information. As a result, a limited amount of data can be predictively maintained on a user's device in cache memory and properly aged out as data becomes stale.
While there are a variety of implementations and ways to sub-divide regions, whether overlapping or not, predictive caching and aging 4600 is conceptually illustrated by the corresponding figure.
Accordingly, using the regional data cache, callbacks, and an update mechanism that is updated dynamically based on movement, new points of interest can be added by a service or by a user. Updating is thus performed continuously or substantially continuously based on updated travel, velocity, speed, etc. In this regard, a user can add a new point of interest in the region, add information to the local cache, and then upload it to the zone. To appreciate the problem, the number of worldwide POIs is practically limitless; however, only a small number of POIs will be relevant to a user at a given time. Thus, predictively, a cube of data can be taken to the device and the user can go offline, such that when the user reconnects, the device is intelligent enough to figure out what has changed, what has been weighted, etc., so that the device can synchronize with the network services and expose the user's changes to other people.
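A minimal sketch of such a roaming regional cache, written against a hypothetical service API, might look like the following:

```python
class RegionalCache:
    """Client-side regional POI cache; service method names are illustrative."""
    def __init__(self, service):
        self.service = service
        self.region = None
        self.pois = {}
        self.pending = []                     # local changes made while offline

    def on_move(self, region_id):
        if region_id != self.region:          # roamed into a new zone
            self.region = region_id
            self.pois = self.service.fetch_region(region_id)

    def add_poi(self, poi, online):
        self.pois[poi["id"]] = poi            # immediately visible locally
        if online:
            self.service.upload(self.region, poi)
        else:
            self.pending.append(poi)

    def on_reconnect(self):
        for poi in self.pending:              # expose the user's changes
            self.service.upload(self.region, poi)
        self.pending.clear()
        self.pois = self.service.fetch_region(self.region)  # pull what changed
```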
The predictive algorithms again depend on what the user is interested in finding, what service the user may be using, the context of the user, etc. They can also be based on velocity, direction, time, etc. For instance, if it is nighttime, assumptions based on demographics or preferences may lead the device to return information about nightclubs or all-night diners. Or, instead of giving driving directions that calculate distances as absolute distances, i.e., as the crow flies, a device can take road curves into account, since instantaneous pointing information on roads can be collected and handled by a corresponding service when giving driving directions. Or, as another alternative, the direction one is heading on a road, such as a highway with a concrete divider, is relevant to the directions that a navigation system should give. Where a U-turn is unavailable and a user passes an exit with a point of interest, for instance, directions should take this into account and consider the heading of the vehicle.
Any device can include the embodiments described herein, including MP3 players, such as a Zune device, GPS navigation devices, bike computers, sunglass/visor systems, motor vehicles, mobile phones, laptops, PDAs, etc.
One way to obtain the service applications, assuming the underlying measuring instruments to participate in the real-time gathering of directional information, is to message a service to obtain the application, e.g., by text messaging to the service or by getting a client download link. Another vehicle for enabling the service is to provide it natively in the operating system or applications of a mobile device. Since a hardware abstraction layer accommodates different methods for collecting position, direction, and acceleration information, the same platform can be used on any device regardless of the precise underlying hardware.
In another aspect of any of the embodiments described herein, because stateless messaging is employed, if communications drop with one network, the device can continue communicating via another network. For instance, suppose a device has two channels, and a user gets on a bus and no longer has GPRS or GPS connectivity. Nonetheless, the user is able to get the information the device needs from some other channel. Just because a tower or satellites are down does not mean that the device cannot connect through an alternative channel, e.g., receiving the bus's GPS location information via Bluetooth.
With respect to exemplary mobile client architectures, a representative device can include, as described variously herein, client-side storage for housing and providing fast access to cached POI data in the current region, including associated dynamically updated or static information, such as annotations, coupons from businesses, etc. This includes usage data tracking and storage. In addition, regional data can be a cached subset of the larger service data, always updated based on the region in which the client is roaming. For instance, POI data could include, as a non-limiting example, the following information (a sketch of a corresponding record follows the list):
POI coordinates and data //{−70.26322, 43.65412, “STARBUCK'S”}
Localized annotations //Menu, prices, hours of operation, etc
Coupons and ads //Classes of coupons (new user, returning, etc)
Support for different kinds of information, e.g., blob vs. structured information (blob for storage and media; structured for tags, annotations, etc.)
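As a minimal, non-limiting sketch, a cached client-side record in the shape suggested above (the field names are illustrative only) might be represented as:

```python
poi_record = {
    "coords": (-70.26322, 43.65412),
    "name": "STARBUCK'S",
    "annotations": {"menu": [], "prices": [], "hours": "0600-2100"},  # localized
    "coupons": [{"class": "new_user"}, {"class": "returning"}],
    "media_blob": b"",          # unstructured: pictures, virtual tours, etc.
    "tags": ["coffee"],         # structured: tags, annotations, etc.
}
```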
A device can also include usage data and preferences to hold settings as well as usage data such as coupons “activated,” waypoints, businesses encountered per day, other users encountered, etc., to be analyzed by the cloud services for business intelligence and reporting.
A device can also include a continuous update mechanism, which is a service that keeps the client's cached copy of the current region updated with the latest data. Among other ways, this can be achieved with a ping-to-pull model that pre-fetches and swaps out the client's cached region using travel direction and speed to facilitate roaming among different regions. This is effectively a paging mechanism for upcoming POIs. It also includes sending a new or modified POI for the region (with annotations and coupons), sending a new or modified annotation for the POIs (with coupons), or sending a new or modified coupon for the POI.
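By way of a non-limiting sketch (the names are hypothetical), the ping-to-pull pre-fetch and swap could proceed as follows on each update tick:

```python
def update_tick(client, service):
    """Pre-fetch the region the client is heading toward; swap at the boundary."""
    nxt = service.predict_next_region(client.position, client.direction,
                                      client.speed)
    if nxt != client.prefetched_region_id:
        client.prefetched = service.fetch_region(nxt)   # page in upcoming POIs
        client.prefetched_region_id = nxt
    if client.near_region_boundary():
        client.region, client.prefetched = client.prefetched, None
```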
A device can also include a Hardware Abstraction Layer (HAL) having components responsible for abstracting the way the client communicates with the measuring instruments, e.g., a GPS driver for positioning and line-of-sight (LOS) accuracy (e.g., open eGPS), a magnetic compass for heading and rotational information (e.g., gyroscopic), and one or more accelerometers for gestured input and tilt (enabling 3-D positional algorithms, assuming a gyroscopic compass).
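A minimal sketch of such a HAL (the interface names are illustrative) defines driver interfaces that per-device implementations fulfill, so the platform above them never changes:

```python
from abc import ABC, abstractmethod

class PositionDriver(ABC):
    @abstractmethod
    def fix(self): ...            # -> (lat, lon, error_m), e.g., from open eGPS

class CompassDriver(ABC):
    @abstractmethod
    def heading(self): ...        # -> compensated degrees from north

class AccelerometerDriver(ABC):
    @abstractmethod
    def sample(self): ...         # -> (ax, ay, az) for gestured input and tilt
```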
As described earlier, a device can also include methods/interfaces to make REST calls via GPRS/Wi-Fi and a file system and storage for storing and retrieving the application data and settings.
A device can also include user input and methods to map input to the virtual keys. For instance, one non-limiting way to accomplish user input is to have softkeys as follows (a sketch of such a mapping follows the list), though it is to be understood that a great variety of user inputs can be used to achieve interaction with the user interfaces of the pointing based services.
SK up/down: //Up and down on choices
SK right, SK ok/confirm: //Choose an option or drill down/next page
SK left, SK cancel/back: //Go back to a previous window, cancel
Exit/Incoming Call events //Exit the app or minimize
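As a minimal, non-limiting sketch (the raw key codes are hypothetical), such input could be mapped to the virtual softkeys listed above:

```python
SOFTKEY_MAP = {
    "KEY_UP": "sk_up", "KEY_DOWN": "sk_down",          # up and down on choices
    "KEY_RIGHT": "sk_ok", "KEY_ENTER": "sk_ok",        # choose / drill down
    "KEY_LEFT": "sk_back", "KEY_ESC": "sk_back",       # previous window / cancel
    "KEY_END": "sk_exit", "CALL_INCOMING": "sk_exit",  # exit or minimize
}

def dispatch(raw_key, ui):
    action = SOFTKEY_MAP.get(raw_key)
    if action:
        getattr(ui, action)()     # e.g., ui.sk_ok() drills into the next page
```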
In addition, a representative device can include a graphics and windowing stack to render the client side UI, as well as an audio stack to play sounds/alerts.
As mentioned, such a device may also include spatial and math computational components, including a set of APIs to perform 3-D collision testing between subdivided surfaces such as spherical shells (e.g., a simple hit-testing model to adopt, with boundary definitions for POIs), to rotate points, and to cull as appropriate from conic sections.
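By way of a non-limiting sketch using simplified geometry, a pointing ray's conic section can cull candidate POIs before any finer hit testing against their boundary definitions:

```python
import math

def in_cone(origin, direction, point, half_angle_deg):
    """direction: unit 3-vector; True if point lies within the pointing cone."""
    v = [p - o for p, o in zip(point, origin)]
    dist = math.sqrt(sum(c * c for c in v)) or 1e-9
    cos_to_point = sum(d * c for d, c in zip(direction, v)) / dist
    return cos_to_point >= math.cos(math.radians(half_angle_deg))

def cull_pois(origin, direction, pois, half_angle_deg=10):
    return [p for p in pois
            if in_cone(origin, direction, p["center"], half_angle_deg)]
```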
As described in various embodiments herein, an exemplary process for rendering point of interest content can proceed as follows.
At 4820, upon selection of a POI, static content is determined and any dynamic content is acquired via synchronization; when new data becomes available, it is downloaded so that the content stays up to date. At 4830, POI information is filtered further by user-specific information (e.g., whether it is the user's first time at the store, a returning customer, a loyalty program member, a live baseball game offer for team clothing discounts, etc.). At 4840, the up-to-date static and dynamic content is rendered for the POI. In addition, updates and/or interaction with the POI information are allowed, which can be synced back to the service.
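A minimal sketch of this flow, written against a hypothetical service API, follows:

```python
def show_poi(poi, user, service, renderer):
    static = poi["static"]                              # 4820: cached content
    dynamic = service.sync_dynamic(poi["id"])           # acquire what changed
    offers = service.filter_for_user(poi["id"], user)   # 4830: per-user filter
    renderer.render(static, dynamic, offers)            # 4840: up-to-date view
    for change in renderer.collect_interactions():
        service.sync_back(poi["id"], change)            # sync back to service
```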
Exemplary Networked and Distributed Environments
One of ordinary skill in the art can appreciate that the various embodiments of methods and devices for pointing based services and related embodiments described herein can be implemented in connection with any computer or other client or server device, which can be deployed as part of a computer network or in a distributed computing environment, and can be connected to any kind of data store. In this regard, the various embodiments described herein can be implemented in any computer system or environment having any number of memory or storage units, and any number of applications and processes occurring across any number of storage units. This includes, but is not limited to, an environment with server computers and client computers deployed in a network environment or a distributed computing environment, having remote or local storage.
Each object 4910, 4912, etc. and computing objects or devices 4920, 4922, 4924, 4926, 4928, etc. can communicate with one or more other objects 4910, 4912, etc. and computing objects or devices 4920, 4922, 4924, 4926, 4928, etc. by way of the communications network 4940, either directly or indirectly. Even though illustrated as a single element, the network 4940 may comprise other computing objects and computing devices that provide services to the system, and/or may represent multiple interconnected networks.
There are a variety of systems, components, and network configurations that support distributed computing environments. For example, computing systems can be connected together by wired or wireless systems, by local networks or widely distributed networks. Currently, many networks are coupled to the Internet, which provides an infrastructure for widely distributed computing and encompasses many different networks, though any network infrastructure can be used for exemplary communications made incident to the techniques as described in various embodiments.
Thus, a host of network topologies and network infrastructures, such as client/server, peer-to-peer, or hybrid architectures, can be utilized. In a client/server architecture, particularly a networked system, a client is usually a computer that accesses shared network resources provided by another computer, e.g., a server. In the illustrated arrangement, as a non-limiting example, computing objects or devices 4920, 4922, 4924, 4926, 4928, etc. can be thought of as clients and objects 4910, 4912, etc. can be thought of as servers, although any computer can be considered a client, a server, or both, depending on the circumstances.
A server is typically a remote computer system accessible over a remote or local network, such as the Internet or wireless network infrastructures. The client process may be active in a first computer system, and the server process may be active in a second computer system, communicating with one another over a communications medium, thus providing distributed functionality and allowing multiple clients to take advantage of the information-gathering capabilities of the server. Any software objects utilized pursuant to the pointing based services can be provided standalone, or distributed across multiple computing devices or objects.
In a network environment in which the communications network/bus 4940 is the Internet, for example, the servers 4910, 4912, etc. can be Web servers with which the clients 4920, 4922, 4924, 4926, 4928, etc. communicate via any of a number of known protocols, such as the hypertext transfer protocol (HTTP). Servers 4910, 4912, etc. may also serve as clients 4920, 4922, 4924, 4926, 4928, etc., as may be characteristic of a distributed computing environment.
Exemplary Computing Device
As mentioned, various embodiments described herein apply to any device wherein it may be desirable to perform pointing based services. It should be understood, therefore, that handheld, portable and other computing devices and computing objects of all kinds are contemplated for use in connection with the various embodiments described herein, i.e., anywhere that a device may request pointing based services. Accordingly, the general purpose remote computer described below is but one example, and the embodiments may be implemented with any client having network/bus interoperability and interaction.
Although not required, any of the embodiments can partly be implemented via an operating system, for use by a developer of services for a device or object, and/or included within application software that operates in connection with the operable component(s). Software may be described in the general context of computer executable instructions, such as program modules, being executed by one or more computers, such as client workstations, servers or other devices. Those skilled in the art will appreciate that network interactions may be practiced with a variety of computer system configurations and protocols.
With reference to the corresponding figure, an exemplary remote device for implementing one or more embodiments includes a general purpose computing device in the form of a computer 5010.
Computer 5010 typically includes a variety of computer readable media and can be any available media that can be accessed by computer 5010. The system memory 5030 may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and/or random access memory (RAM). By way of example, and not limitation, memory 5030 may also include an operating system, application programs, other program modules, and program data.
A user may enter commands and information into the computer 5010 through input devices 5040. A monitor or other type of display device is also connected to the system bus 5021 via an interface, such as output interface 5050. In addition to a monitor, computers may also include other peripheral output devices, such as speakers and a printer, which may be connected through output interface 5050.
The computer 5010 may operate in a networked or distributed environment using logical connections to one or more other remote computers, such as remote computer 5070. The remote computer 5070 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, or any other remote media consumption or transmission device, and may include any or all of the elements described above relative to the computer 5010. The logical connections include a network, such as a local area network (LAN) or a wide area network (WAN), but may also include other networks/buses.
As mentioned above, while exemplary embodiments have been described in connection with various computing devices, networks and advertising architectures, the underlying concepts may be applied to any network system and any computing device or system in which it is desirable to derive information about surrounding points of interest.
There are multiple ways of implementing one or more of the embodiments described herein, e.g., an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc. which enables applications and services to use the pointing based services. Embodiments may be contemplated from the standpoint of an API (or other software object), as well as from a software or hardware object that provides pointing platform services in accordance with one or more of the described embodiments. Various implementations and embodiments described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software.
The word “exemplary” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
As mentioned, the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. As used herein, the terms “component,” “system” and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
In view of the exemplary systems described supra, methodologies that may be implemented in accordance with the disclosed subject matter will be better appreciated with reference to the flowcharts of the various figures. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Where non-sequential, or branched, flow is illustrated via flowchart, it can be appreciated that various other branches, flow paths, and orders of the blocks, may be implemented which achieve the same or a similar result. Moreover, not all illustrated blocks may be required to implement the methodologies described hereinafter.
While the various embodiments have been described in connection with the preferred embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiment for performing the same function without deviating therefrom. Still further, one or more aspects of the above described embodiments may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Therefore, the present invention should not be limited to any single embodiment, but rather should be construed in breadth and scope in accordance with the appended claims.
This application is a continuation of U.S. patent application Ser. No. 14/453,312, filed on Aug. 6, 2014, entitled “DATA SERVICES BASED ON GESTURE AND LOCATION INFORMATION OF DEVICE,” which is a continuation of U.S. patent application Ser. No. 13/908,737, filed on Jun. 3, 2013, entitled “DATA SERVICES BASED ON GESTURE AND LOCATION INFORMATION OF DEVICE,” now U.S. Pat. No. 8,868,374, which is a continuation of U.S. patent application Ser. No. 12/437,857, filed on May 8, 2009, entitled “DATA SERVICES BASED ON GESTURE AND LOCATION INFORMATION OF DEVICE,” now U.S. Pat. No. 8,467,991, which application claims the benefit of and priority to both U.S. Provisional Application Ser. No. 61/074,415, filed on Jun. 20, 2008, entitled “MOBILE COMPUTING SERVICES BASED ON DEVICES WITH DYNAMIC DIRECTION INFORMATION,” as well as U.S. Provisional Application Ser. No. 61/074,590, filed on Jun. 20, 2008, entitled “MOBILE COMPUTING SERVICES BASED ON DEVICES WITH DYNAMIC DIRECTION INFORMATION.” The contents of each of the foregoing applications are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
4262199 | Bridges et al. | Apr 1981 | A |
4745545 | Schiffleger | May 1988 | A |
5424524 | Ruppert et al. | Jun 1995 | A |
5767795 | Schaphorst | Jun 1998 | A |
5781908 | Williams et al. | Jul 1998 | A |
5787262 | Shakib et al. | Jul 1998 | A |
5892900 | Ginter et al. | Apr 1999 | A |
5948040 | DeLorme et al. | Sep 1999 | A |
6084594 | Goto | Jul 2000 | A |
6133947 | Mikuni | Oct 2000 | A |
6141014 | Endo et al. | Oct 2000 | A |
6243076 | Hatfield | Jun 2001 | B1 |
6252544 | Hoffberg | Jun 2001 | B1 |
6317688 | Bruckner et al. | Nov 2001 | B1 |
6317754 | Peng | Nov 2001 | B1 |
6321158 | DeLorme et al. | Nov 2001 | B1 |
6327533 | Chou | Dec 2001 | B1 |
6332127 | Bandera et al. | Dec 2001 | B1 |
6353398 | Amin et al. | Mar 2002 | B1 |
6360167 | Millington et al. | Mar 2002 | B1 |
6369794 | Sakurai et al. | Apr 2002 | B1 |
6372974 | Gross et al. | Apr 2002 | B1 |
6374180 | Slominski et al. | Apr 2002 | B1 |
6381465 | Chern et al. | Apr 2002 | B1 |
6381603 | Chan et al. | Apr 2002 | B1 |
6421602 | Bullock et al. | Jul 2002 | B1 |
6452544 | Hakala et al. | Sep 2002 | B1 |
6466938 | Goldberg | Oct 2002 | B1 |
6470264 | Bide | Oct 2002 | B2 |
6526335 | Treyz et al. | Feb 2003 | B1 |
6529144 | Nilsen et al. | Mar 2003 | B1 |
6542818 | Oesterling | Apr 2003 | B1 |
6587835 | Treyz et al. | Jul 2003 | B1 |
6615246 | Pivowar et al. | Sep 2003 | B2 |
6636873 | Carini et al. | Oct 2003 | B1 |
6643669 | Novak et al. | Nov 2003 | B1 |
6661353 | Gopen | Dec 2003 | B1 |
6672506 | Swartz et al. | Jan 2004 | B2 |
6678882 | Hurley et al. | Jan 2004 | B1 |
6741188 | Miller et al. | May 2004 | B1 |
6763226 | McZeal, Jr. | Jul 2004 | B1 |
D494584 | Schlieffers et al. | Aug 2004 | S |
6771294 | Pulli et al. | Aug 2004 | B1 |
6795768 | Bragansa et al. | Sep 2004 | B2 |
6796505 | Pellaumail et al. | Sep 2004 | B2 |
6810405 | LaRue et al. | Oct 2004 | B1 |
6837436 | Swartz et al. | Jan 2005 | B2 |
6850837 | Paulauskas et al. | Feb 2005 | B2 |
6895503 | Tadayon et al. | May 2005 | B2 |
6898517 | Froeberg | May 2005 | B1 |
6912398 | Domnitz | Jun 2005 | B1 |
6930715 | Mower | Aug 2005 | B1 |
6983293 | Wang | Jan 2006 | B2 |
6992619 | Harrison | Jan 2006 | B2 |
7010501 | Roslak et al. | Mar 2006 | B1 |
7031875 | Ellenby et al. | Apr 2006 | B2 |
7032003 | Shi et al. | Apr 2006 | B1 |
7040541 | Swartz et al. | May 2006 | B2 |
7063263 | Swartz et al. | Jun 2006 | B2 |
7064706 | King et al. | Jun 2006 | B2 |
7082365 | Sheha et al. | Jul 2006 | B2 |
7092964 | Dougherty et al. | Aug 2006 | B1 |
7103365 | Myllymaki | Sep 2006 | B2 |
7103370 | Creemer | Sep 2006 | B1 |
7103844 | Jones et al. | Sep 2006 | B2 |
7107038 | Fitch et al. | Sep 2006 | B2 |
7133892 | Khan et al. | Nov 2006 | B2 |
7136945 | Gibbs et al. | Nov 2006 | B2 |
7142205 | Chithambaram et al. | Nov 2006 | B2 |
7171378 | Petrovich et al. | Jan 2007 | B2 |
7191218 | Innes | Mar 2007 | B1 |
7195157 | Swartz et al. | Mar 2007 | B2 |
7198192 | Page et al. | Apr 2007 | B2 |
7245923 | Frank et al. | Jul 2007 | B2 |
7321826 | Sheha et al. | Jan 2008 | B2 |
7340333 | Lenneman et al. | Mar 2008 | B2 |
7358985 | Uchihashi et al. | Apr 2008 | B2 |
7385501 | Miller et al. | Jun 2008 | B2 |
7389179 | Jin et al. | Jun 2008 | B2 |
7428418 | Cole et al. | Sep 2008 | B2 |
7441203 | Othmer et al. | Oct 2008 | B2 |
7460953 | Herbst et al. | Dec 2008 | B2 |
7501981 | Rahman et al. | Mar 2009 | B2 |
7587276 | Gold et al. | Sep 2009 | B2 |
7602944 | Campbell et al. | Oct 2009 | B2 |
7620404 | Chesnais et al. | Nov 2009 | B2 |
7620659 | Novik et al. | Nov 2009 | B2 |
7653576 | Boss et al. | Jan 2010 | B2 |
7720844 | Chu et al. | May 2010 | B2 |
7747528 | Robinson et al. | Jun 2010 | B1 |
7783523 | Lopez et al. | Aug 2010 | B2 |
7788032 | Moloney | Aug 2010 | B2 |
7801058 | Wang | Sep 2010 | B2 |
7844415 | Bryant et al. | Nov 2010 | B1 |
7920072 | Smith et al. | Apr 2011 | B2 |
7941269 | Laumeyer et al. | May 2011 | B2 |
7990394 | Vincent et al. | Aug 2011 | B2 |
8014763 | Hymes | Sep 2011 | B2 |
8023962 | Frank et al. | Sep 2011 | B2 |
8098894 | Soderstrom | Jan 2012 | B2 |
8099109 | Altman et al. | Jan 2012 | B2 |
8165034 | Buchwald et al. | Apr 2012 | B2 |
8170795 | Brulle-Drews et al. | May 2012 | B2 |
8200246 | Khosravy et al. | Jun 2012 | B2 |
8249949 | Nash | Aug 2012 | B2 |
8296061 | Nesbitt | Oct 2012 | B2 |
8407003 | Geelen et al. | Mar 2013 | B2 |
8447331 | Busch | May 2013 | B2 |
8615257 | Khosravy et al. | Dec 2013 | B2 |
8872767 | Khosravy et al. | Oct 2014 | B2 |
9200901 | Khosravy et al. | Dec 2015 | B2 |
9661468 | Khosravy et al. | May 2017 | B2 |
10057724 | Khosravy et al. | Aug 2018 | B2 |
20010030664 | Shulman et al. | Oct 2001 | A1 |
20010036224 | Demello et al. | Nov 2001 | A1 |
20010039546 | Moore et al. | Nov 2001 | A1 |
20020002504 | Engel et al. | Jan 2002 | A1 |
20020042750 | Morrison | Apr 2002 | A1 |
20020059256 | Halim et al. | May 2002 | A1 |
20020059266 | I'anson et al. | May 2002 | A1 |
20020077905 | Arndt et al. | Jun 2002 | A1 |
20020080167 | Andrews et al. | Jun 2002 | A1 |
20020091568 | Kraft et al. | Jul 2002 | A1 |
20020111873 | Ehrlich et al. | Aug 2002 | A1 |
20020124067 | Parupudi et al. | Sep 2002 | A1 |
20020138196 | Polidi et al. | Sep 2002 | A1 |
20020140745 | Ellenby et al. | Oct 2002 | A1 |
20020165771 | Walker et al. | Nov 2002 | A1 |
20020191034 | Sowizral et al. | Dec 2002 | A1 |
20030036848 | Sheha et al. | Feb 2003 | A1 |
20030046158 | Kratky | Mar 2003 | A1 |
20030046164 | Sato et al. | Mar 2003 | A1 |
20030061110 | Bodin | Mar 2003 | A1 |
20030069690 | Correia et al. | Apr 2003 | A1 |
20030069693 | Snapp et al. | Apr 2003 | A1 |
20030078002 | Sanjeev et al. | Apr 2003 | A1 |
20030100315 | Rankin | May 2003 | A1 |
20030101059 | Heyman | May 2003 | A1 |
20030142853 | Waehner et al. | Jul 2003 | A1 |
20030174838 | Bremer | Sep 2003 | A1 |
20030182319 | Morrison | Sep 2003 | A1 |
20030195851 | Ong | Oct 2003 | A1 |
20030208315 | Mays | Nov 2003 | A1 |
20030220966 | Hepper et al. | Nov 2003 | A1 |
20040024727 | Bowman | Feb 2004 | A1 |
20040032410 | Ryan | Feb 2004 | A1 |
20040070602 | Kobuya et al. | Apr 2004 | A1 |
20040107072 | Dietrich et al. | Jun 2004 | A1 |
20040122870 | Park et al. | Jun 2004 | A1 |
20040128324 | Sheynman et al. | Jul 2004 | A1 |
20040128499 | Peterka et al. | Jul 2004 | A1 |
20040130524 | Matsui | Jul 2004 | A1 |
20040147329 | Meadows et al. | Jul 2004 | A1 |
20040153473 | Hutchinson et al. | Aug 2004 | A1 |
20040201500 | Miller et al. | Oct 2004 | A1 |
20040203863 | Huomo | Oct 2004 | A1 |
20040214550 | Jenkins | Oct 2004 | A1 |
20040236500 | Choi et al. | Nov 2004 | A1 |
20040259573 | Cheng | Dec 2004 | A1 |
20040260464 | Wong | Dec 2004 | A1 |
20050015436 | Singh et al. | Jan 2005 | A1 |
20050027755 | Shah et al. | Feb 2005 | A1 |
20050044187 | Jhaveri et al. | Feb 2005 | A1 |
20050049993 | Nori et al. | Mar 2005 | A1 |
20050063563 | Soliman | Mar 2005 | A1 |
20050071280 | Irwin et al. | Mar 2005 | A1 |
20050160014 | Moss et al. | Jul 2005 | A1 |
20050172261 | Yuknewicz et al. | Aug 2005 | A1 |
20050172296 | Schleifer et al. | Aug 2005 | A1 |
20050174324 | Liberty et al. | Aug 2005 | A1 |
20050203905 | Jung et al. | Sep 2005 | A1 |
20050212753 | Marvit et al. | Sep 2005 | A1 |
20050212760 | Marvit et al. | Sep 2005 | A1 |
20050223047 | Shah et al. | Oct 2005 | A1 |
20050235018 | Tsinman et al. | Oct 2005 | A1 |
20050240591 | Marceau et al. | Oct 2005 | A1 |
20050243061 | Liberty et al. | Nov 2005 | A1 |
20050256782 | Sands et al. | Nov 2005 | A1 |
20050266858 | Miller et al. | Dec 2005 | A1 |
20050272442 | Miller et al. | Dec 2005 | A1 |
20060004713 | Korte et al. | Jan 2006 | A1 |
20060019676 | Miller et al. | Jan 2006 | A1 |
20060041663 | Brown et al. | Feb 2006 | A1 |
20060047776 | Chieng et al. | Mar 2006 | A1 |
20060058041 | Cheng | Mar 2006 | A1 |
20060061551 | Fateh | Mar 2006 | A1 |
20060064346 | Steenstra et al. | Mar 2006 | A1 |
20060069798 | Li et al. | Mar 2006 | A1 |
20060106879 | Zondervan et al. | May 2006 | A1 |
20060106881 | Leung et al. | May 2006 | A1 |
20060107330 | Ben-Yaacov et al. | May 2006 | A1 |
20060122035 | Felix et al. | Jun 2006 | A1 |
20060123010 | Landry et al. | Jun 2006 | A1 |
20060123053 | Scannell | Jun 2006 | A1 |
20060155778 | Sharma et al. | Jul 2006 | A1 |
20060158310 | Klatsmanyi et al. | Jul 2006 | A1 |
20060161379 | Ellenby et al. | Jul 2006 | A1 |
20060161516 | Clarke et al. | Jul 2006 | A1 |
20060176516 | Rothschild | Aug 2006 | A1 |
20060190497 | Inturi et al. | Aug 2006 | A1 |
20060190572 | Novik et al. | Aug 2006 | A1 |
20060194596 | Deng | Aug 2006 | A1 |
20060199536 | Eisenbach | Sep 2006 | A1 |
20060215569 | Khosravy et al. | Sep 2006 | A1 |
20060223518 | Haney | Oct 2006 | A1 |
20060229807 | Sheha et al. | Oct 2006 | A1 |
20060253392 | Davies | Nov 2006 | A1 |
20060256007 | Rosenberg | Nov 2006 | A1 |
20060256008 | Rosenberg | Nov 2006 | A1 |
20060259574 | Rosenberg | Nov 2006 | A1 |
20060270421 | Phillips et al. | Nov 2006 | A1 |
20060271286 | Rosenberg | Nov 2006 | A1 |
20060288053 | Holt et al. | Dec 2006 | A1 |
20060288344 | Brodersen et al. | Dec 2006 | A1 |
20060291482 | Evans | Dec 2006 | A1 |
20070004451 | C. Anderson | Jan 2007 | A1 |
20070005243 | Horvitz et al. | Jan 2007 | A1 |
20070006098 | Krumm et al. | Jan 2007 | A1 |
20070008110 | Li et al. | Jan 2007 | A1 |
20070015515 | Matsuda | Jan 2007 | A1 |
20070016368 | Chapin et al. | Jan 2007 | A1 |
20070021208 | Mao et al. | Jan 2007 | A1 |
20070024527 | Heikkinen et al. | Feb 2007 | A1 |
20070032943 | Okabe | Feb 2007 | A1 |
20070053056 | Charlesworth et al. | Mar 2007 | A1 |
20070060114 | Ramer et al. | Mar 2007 | A1 |
20070078596 | Grace | Apr 2007 | A1 |
20070080216 | Ward et al. | Apr 2007 | A1 |
20070091172 | Lee | Apr 2007 | A1 |
20070091292 | Cho et al. | Apr 2007 | A1 |
20070100834 | Landry et al. | May 2007 | A1 |
20070104348 | Cohen | May 2007 | A1 |
20070118278 | Finn et al. | May 2007 | A1 |
20070130217 | Linyard et al. | Jun 2007 | A1 |
20070139366 | Dunko et al. | Jun 2007 | A1 |
20070150444 | Chesnais et al. | Jun 2007 | A1 |
20070161382 | Melinger et al. | Jul 2007 | A1 |
20070162942 | Hamynen et al. | Jul 2007 | A1 |
20070165554 | Jefferson et al. | Jul 2007 | A1 |
20070219706 | Sheynblat | Sep 2007 | A1 |
20070219708 | Brasche et al. | Sep 2007 | A1 |
20070230747 | Dunko | Oct 2007 | A1 |
20070233385 | Dicke et al. | Oct 2007 | A1 |
20070242661 | Tran | Oct 2007 | A1 |
20070244633 | Phillips et al. | Oct 2007 | A1 |
20070259716 | Mattice et al. | Nov 2007 | A1 |
20070260398 | Stelpstra et al. | Nov 2007 | A1 |
20070271317 | Carmel | Nov 2007 | A1 |
20070272738 | Berkun | Nov 2007 | A1 |
20070274563 | Jung et al. | Nov 2007 | A1 |
20070275691 | Boda | Nov 2007 | A1 |
20070282564 | Sprague et al. | Dec 2007 | A1 |
20070290037 | Arellanes et al. | Dec 2007 | A1 |
20080004802 | Horvitz | Jan 2008 | A1 |
20080027632 | Mauderer | Jan 2008 | A1 |
20080028325 | Ferren et al. | Jan 2008 | A1 |
20080036586 | Ohki | Feb 2008 | A1 |
20080036766 | Ishii et al. | Feb 2008 | A1 |
20080043108 | Jung et al. | Feb 2008 | A1 |
20080046298 | Ben-Yehuda et al. | Feb 2008 | A1 |
20080056535 | Bergmann et al. | Mar 2008 | A1 |
20080059578 | Albertson et al. | Mar 2008 | A1 |
20080065322 | Ng et al. | Mar 2008 | A1 |
20080065325 | Geelen et al. | Mar 2008 | A1 |
20080071620 | Lowe | Mar 2008 | A1 |
20080077319 | Kato et al. | Mar 2008 | A1 |
20080082254 | Huhtala et al. | Apr 2008 | A1 |
20080090591 | Miller et al. | Apr 2008 | A1 |
20080091518 | Eisenson et al. | Apr 2008 | A1 |
20080091537 | Miller et al. | Apr 2008 | A1 |
20080092057 | Monson et al. | Apr 2008 | A1 |
20080097698 | Arnold-Huyser et al. | Apr 2008 | A1 |
20080113674 | Baig | May 2008 | A1 |
20080122785 | Harmon | May 2008 | A1 |
20080122871 | Guday | May 2008 | A1 |
20080132249 | Hamilton | Jun 2008 | A1 |
20080132251 | Altman et al. | Jun 2008 | A1 |
20080140835 | Bradley et al. | Jun 2008 | A1 |
20080147730 | Lee et al. | Jun 2008 | A1 |
20080161018 | Miller et al. | Jul 2008 | A1 |
20080172374 | Wolosin et al. | Jul 2008 | A1 |
20080172496 | Middleton et al. | Jul 2008 | A1 |
20080174679 | Tanino | Jul 2008 | A1 |
20080183380 | Blackwood | Jul 2008 | A1 |
20080192005 | Elgoyhen et al. | Aug 2008 | A1 |
20080195759 | Novik et al. | Aug 2008 | A1 |
20080201074 | Bleckman et al. | Aug 2008 | A1 |
20080214166 | Ramer et al. | Sep 2008 | A1 |
20080215202 | Breed | Sep 2008 | A1 |
20080228429 | Huang et al. | Sep 2008 | A1 |
20080234931 | Wang et al. | Sep 2008 | A1 |
20080248815 | Busch | Oct 2008 | A1 |
20080250337 | Lemmela et al. | Oct 2008 | A1 |
20080268855 | Hanuni et al. | Oct 2008 | A1 |
20080268876 | Gelfand et al. | Oct 2008 | A1 |
20080273109 | Bamford | Nov 2008 | A1 |
20080281794 | Mathur | Nov 2008 | A1 |
20080288486 | Kim et al. | Nov 2008 | A1 |
20080293431 | Buerger et al. | Nov 2008 | A1 |
20090003659 | Forstall et al. | Jan 2009 | A1 |
20090005021 | Forstall et al. | Jan 2009 | A1 |
20090005076 | Forstall et al. | Jan 2009 | A1 |
20090005077 | Forstall et al. | Jan 2009 | A1 |
20090005080 | Forstall et al. | Jan 2009 | A1 |
20090005968 | Vengroff et al. | Jan 2009 | A1 |
20090005987 | Vengroff et al. | Jan 2009 | A1 |
20090006194 | Sridharan et al. | Jan 2009 | A1 |
20090006345 | Platt et al. | Jan 2009 | A1 |
20090030778 | Zapata et al. | Jan 2009 | A1 |
20090031258 | Arrasvuori et al. | Jan 2009 | A1 |
20090033540 | Breed et al. | Feb 2009 | A1 |
20090036145 | Rosenblum | Feb 2009 | A1 |
20090037273 | Zhu | Feb 2009 | A1 |
20090040370 | Varanasi | Feb 2009 | A1 |
20090051648 | Shamaie et al. | Feb 2009 | A1 |
20090054077 | Gauthier et al. | Feb 2009 | A1 |
20090076723 | Moloney | Mar 2009 | A1 |
20090102859 | Athsani et al. | Apr 2009 | A1 |
20090111434 | Yu et al. | Apr 2009 | A1 |
20090143078 | Tu et al. | Jun 2009 | A1 |
20090163228 | Blumberg et al. | Jun 2009 | A1 |
20090192704 | Geelen | Jul 2009 | A1 |
20090198767 | Jakobson et al. | Aug 2009 | A1 |
20090207184 | Laine et al. | Aug 2009 | A1 |
20090237328 | Gyorfi et al. | Sep 2009 | A1 |
20090248288 | Bell et al. | Oct 2009 | A1 |
20090259568 | Lee | Oct 2009 | A1 |
20090265671 | Sachs et al. | Oct 2009 | A1 |
20090287527 | Kolb et al. | Nov 2009 | A1 |
20090315766 | Khosravy et al. | Dec 2009 | A1 |
20090315775 | Khosravy et al. | Dec 2009 | A1 |
20090315776 | Khosravy et al. | Dec 2009 | A1 |
20090315995 | Khosravy et al. | Dec 2009 | A1 |
20090318168 | Khosravy et al. | Dec 2009 | A1 |
20090319166 | Khosravy et al. | Dec 2009 | A1 |
20090319175 | Khosravy et al. | Dec 2009 | A1 |
20090319177 | Khosravy et al. | Dec 2009 | A1 |
20090319178 | Khosravy et al. | Dec 2009 | A1 |
20090319181 | Khosravy et al. | Dec 2009 | A1 |
20090319348 | Khosravy et al. | Dec 2009 | A1 |
20100008255 | Khosravy et al. | Jan 2010 | A1 |
20100009662 | Khosravy et al. | Jan 2010 | A1 |
20100016022 | Liu et al. | Jan 2010 | A1 |
20100030646 | Riise et al. | Feb 2010 | A1 |
20100076968 | Boyns et al. | Mar 2010 | A1 |
20100125622 | White et al. | May 2010 | A1 |
20100125816 | Bezos | May 2010 | A1 |
20100156812 | Stallings et al. | Jun 2010 | A1 |
20100205628 | Davis et al. | Aug 2010 | A1 |
20100214111 | Schuler et al. | Aug 2010 | A1 |
20100228612 | Khosravy et al. | Sep 2010 | A1 |
20100332324 | Khosravy et al. | Dec 2010 | A1 |
20110006977 | Khosravy et al. | Jan 2011 | A1 |
20110046879 | Celli et al. | Feb 2011 | A1 |
20110093227 | Huang et al. | Apr 2011 | A1 |
20110159857 | Faith et al. | Jun 2011 | A1 |
20120190386 | Anderson | Jul 2012 | A1 |
20120264457 | Khosravy et al. | Oct 2012 | A1 |
20130265223 | Khosravy et al. | Oct 2013 | A1 |
20150022549 | Khosravy et al. | Jan 2015 | A1 |
20150066365 | Khosravy et al. | Mar 2015 | A1 |
20160057581 | Khosravy et al. | Feb 2016 | A1 |
20170249748 | Khosravy et al. | Aug 2017 | A1 |
20180359607 | Khosravy et al. | Dec 2018 | A1 |
Number | Date | Country |
---|---|---|
1857944 | Nov 2007 | EP |
2000123027 | Apr 2000 | JP |
2001312507 | Nov 2001 | JP |
2002024698 | Jan 2002 | JP |
2002140620 | May 2002 | JP |
2002238080 | Aug 2002 | JP |
2002245333 | Aug 2002 | JP |
2003242407 | Aug 2003 | JP |
2005044427 | Feb 2005 | JP |
2006023793 | Jan 2006 | JP |
2006044512 | Feb 2006 | JP |
2006323790 | Nov 2006 | JP |
2007072730 | Mar 2007 | JP |
2007234056 | Sep 2007 | JP |
2008040884 | Feb 2008 | JP |
2008257644 | Oct 2008 | JP |
2009080662 | Apr 2009 | JP |
9855833 | Dec 1998 | WO |
9942947 | Aug 1999 | WO |
0000566 | Jan 2000 | WO |
0005666 | Feb 2000 | WO |
0135307 | May 2001 | WO |
0188687 | Nov 2001 | WO |
02073818 | Sep 2002 | WO |
02095535 | Nov 2002 | WO |
3047285 | Jun 2003 | WO |
2004057368 | Jul 2004 | WO |
2005101200 | Oct 2005 | WO |
2005116794 | Dec 2005 | WO |
2006024873 | Apr 2006 | WO |
2006081575 | Aug 2006 | WO |
2007021996 | Feb 2007 | WO |
2007132055 | Nov 2007 | WO |
2008007260 | Jan 2008 | WO |
2008014255 | Jan 2008 | WO |
Entry |
---|
Notice of Allowance dated Jul. 28, 2017 cited in U.S. Appl. No. 15/516,066. |
“Non Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Nov. 2, 2018, 22 Pages. |
“Non Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Sep. 6, 2018, 34 Pages. |
“Non Final Office Action Issued in U.S. Appl. No. 15/596,959”, dated Oct. 26, 2018, 14 Pages. |
“Office Action Issued in Chinese Patent Application No. 201080011811.1”, dated Aug. 29, 2013, 14 Pages. |
“International Search Report and Written Opinion Issued in PCT Application No. PCT/US2010/025684”, dated Sep. 28, 2010, 9 Pages. |
International Preliminary Report on Patentability issued in PCT Application No. PCT/US2016/024699 dated Jul. 31, 2017. |
“Supplemental Search Report Issued in European Patent Application No. 08729329.6”, dated Apr. 13, 2016, 8 Pages. |
“Administrator's Guide”, Red Hat Directory Server, Version 7.1, May, 2005, 656 Pages. |
“Advantages of Microsoft Merge Replication for Mobile and Distributed Applications”, Retrieved From <<http://download.microsoft.com/download/3/d/9/3d93d494-6ad0-4651-86de-09e1bd43d03f/sq12005mergecomparitive.doc>>, Feb. 2006, 13 Pages. |
“Efficasoft GPS Utilities”, Retrieved From <<http://www.clickapps.com/moreinfo.htm?pid=14274§ion=PPC&PHPSESSID=af4-3ec3daed820b0e01d0e8cfa68849b&T091620080618=1u>>, Jun. 19, 2008, 3 Pages. |
“Hi-406bt-C Bluetooth GPS Receiver with Digital Compass”, Retrieved From <<https://web.archive.org/web/20080510155917/http://13030597.trustpass.alibaba.com/product/11705884/Hi_406bt_C_Bluetooth_GPS_Receiver_With_Digital_Compass.html>>, Jun. 19, 2008, 3 Pages. |
“New Technology Product Links Online Shoppers With Brick-And-Mortar Merchants; Yclip, First Data”, Retrieved From <<http://www.allbusiness.com/marketing-advertising/6443230-1.html>>, May 18, 2000, 3 Pages. |
“POI Alert”, Retrieved From <<https://web.archive.org/web/20090228060539/http://www.wayviewer.de/en/poialert.html>>, Retrieved Date: Mar. 20, 2009, 4 Pages. |
“Sense Networks Launches Software Platform That Indexes the real World Using Mobile Location Data”, Retrieved From <<https://web.archive.org/web/20080613200334/http://www.lbszone.com/content/view/3439/2/>>, Jun. 9, 2008, 1 Page. |
“Sony NV-U80 Widescreen Portable Navigation”, Retrieved From <<https://web.archive.org/web/20070210062136/http://incarexpress.co.uk/view_product.php?partno=NVU80>>, Retrieved Date: Mar. 17, 2009, 2 Pages. |
“Sony NV-U92T Sat Nav Systems”, Retrieved From <<http://www.satellitenavigation.org.uk/category/sony/page/2/>>, Mar. 17, 2009, 10 Pages. |
“The iPointer Platform Next Generation Location-Based Services Today”, Retrieved From <<https://web.archive.org/web/20090411120157/http://www.i-spatialtech.com/PDF/ipointer_data_sheet.pdf>>, May 19, 2009, 2 Pages. |
“Office Action Issued in European Application No. 10751175.0”, dated Jul. 15, 2016, 5 Pages. |
“Supplementary European Search Report Issued in European Patent Application No. 10751175.0”, dated Apr. 10, 2015, 14 Pages. |
“Supplementary Search Report Issued in European Patent Application No. 10751175.0”, dated Nov. 14, 2014, 8 Pages. |
“Search Report Issued in European Patent Application No. 10792585.1”, dated Jul. 31, 2014,8 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 11/673,415”, dated Nov. 25, 2008, 21 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 11/673,415”, dated Jul. 14, 2009, 12 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/362,093”, dated Sep. 28, 2011, 17 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/362,093”, dated Apr. 27, 2011, 15 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/362,093”, dated Sep. 23, 2013, 11 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/363,655”, dated Dec. 19, 2013, 13 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/363,655”, dated Jan. 5, 2012, 11 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/363,655”, dated Sep. 20, 2011, 18 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/363,655”, dated Jan. 29, 2013, 14 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Jun. 7, 2012, 10 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Aug. 28, 2015, 18 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated May 4, 2017, 19 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Jul. 14, 2016, 19 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Jan. 30, 2015, 14 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Jun. 24, 2014, 12 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/364,936”, dated Oct. 5, 2011, 13 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Mar. 16, 2012, 11 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated May 26, 2015, 16 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Feb. 28, 2017, 20 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Mar. 1, 2016, 17 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Nov. 20, 2014, 13 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated May 22, 2014, 11 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Oct. 11, 2011, 10 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/437,857”, dated Jul. 27, 2011, 11 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,857”, dated May 31, 2011, 13 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,857”, dated Oct. 19, 2012, 16 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/437,857”, dated Feb. 14, 2013, 8 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Feb. 7, 2012, 20 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Jan. 3, 2013, 23 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Sep. 26, 2013, 18 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Jun. 1, 2016, 28 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Oct. 23, 2014, 18 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Sep. 10, 2015, 26 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Jun. 6, 2013, 20 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Aug. 30, 2012, 22 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/437,863”, dated Jun. 22, 2011, 17 Pages. |
“Non Final Office Action Issued in U.S. Appl. No. 12/400,087”, dated Mar. 21, 2019, 23 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/476,406”, dated Apr. 18, 2011, 13 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/476,406”, dated Oct. 14, 2011, 16 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/476,406”, dated Jun. 21, 2012, 12 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/476,417”, dated Jan. 11, 2012, 9 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/476,417”, dated Feb. 24, 2015, 12 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/476,417”, dated Jul. 23, 2014, 9 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/476,417”, dated Aug. 9, 2011, 20 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/476,417”, dated Jul. 22, 2015, 15 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/476,426”, dated Dec. 7, 2011, 16 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/476,426”, dated Aug. 3, 2011, 17 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/476,426”, dated Jul. 16, 2014, 14 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/476,426”, dated Feb. 24, 2016, 13 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/483,920”, dated Dec. 12, 2011, 15 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/483,920”, dated May 7, 2012, 10 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/483,982”, dated Oct. 17, 2011, 23 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/483,982”, dated Jul. 20, 2011, 19 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/483,982”, dated Feb. 21, 2012, 12 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/491,519”, dated Mar. 16, 2012,10 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/491,519”, dated May 22, 2015, 23 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/491,519”, dated May 22, 2014, 11 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/491,519”, dated Sep. 12, 2014, 10 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/491,519”, dated Sep. 29, 2011, 12 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/499,016”, dated Aug. 8, 2012, 24 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/499,016”, dated Apr. 10, 2013, 28 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/499,016”, dated Feb. 26, 2014, 36 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/499,016”, dated Sep. 23, 2013, 33 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/499,016”, dated Jan. 10, 2013, 29 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/499,016”, dated Mar. 1, 2012, 25 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/499,016”, dated Jun. 20, 2014, 7 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/536,889”, dated May 24, 2012, 14 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/536,889”, dated Feb. 10, 2012, 15 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 12/536,889”, dated Oct. 29, 2013, 11 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/536,917”, dated Mar. 16, 2012, 9 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/536,917”, dated Sep. 25, 2015, 13 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/536,917”, dated Jan. 5, 2015, 13 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/536,917”, dated Jun. 6, 2014, 7 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/536,917”, dated Oct. 6, 2011, 12 Pages. |
“Final Office Action Issued in U.S. Appl. No. 12/536,937”, dated Jun. 21, 2012, 7 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/536,937”, dated Feb. 24, 2014, 14 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 12/536,937”, dated Nov. 9, 2011, 11 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 14/453,312”, dated Jan. 4, 2017, 7 Pages. |
“Final Office Action Issued in U.S. Appl. No. 13/485,320”, dated Dec. 19, 2012, 16 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 13/485,320”, dated Sep. 10, 2012, 15 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 13/485,320”, dated Aug. 21, 2013, 15 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 13/908,737”, dated Dec. 24, 2013, 7 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 13/908,737”, dated May 14, 2014, 9 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 14/453,312”, dated Mar. 10, 2017, 8 Pages. |
“Final Office Action Issued in U.S. Appl. No. 14/505,456”, dated May 12, 2016, 20 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 14/505,456”, dated Aug. 26, 2016, 20 Pages. |
“Non-Final Office Action Issued in U.S. Appl. No. 14/505,456”, dated Nov. 18, 2015, 31 Pages. |
Office Action cited in U.S. Appl. No. 14/934,008 dated Nov. 1, 2017. |
“Final Office Action Issued in U.S. Appl. No. 15/596,959”, dated May 14, 2019, 17 Pages. |
“Notice of Allowance Issued in U.S. Appl. No. 14/505,456”, dated Feb. 1, 2017, 11 Pages. |
“Final Office Action Issued in China Application No. 201080011811.1”, dated Apr. 27, 2015, 8 Pages. |
“Office Action Issued in Chinese Patent Application No. 201080011811.1”, dated Jan. 25, 2017, 16 Pages. |
“Second Office Action Received in China Patent Application No. 201080011811.1”, dated Apr. 18, 2014, 9 Pages. |
“Third Office Action received for Chinese Patent Application No. 201080011811.1”, dated Oct. 17, 2014, 10 Pages. |
“Notice of Allowance Received in Japan Patent Application No. 2011-554074”, dated Dec. 10, 2013, 3 Pages. |
“Notice of Allowance Issued in Japanese Patent Application No. 2012517668”, dated Jan. 23, 2014, 4 Pages. |
Benshoof, Paul, “Civilian GPS Systems and Potential Vulnerabilities”, Retrieved From <<http://www.gps.gov/cgsic/international/2005/prague/benshoof.ppt>>, Sep. 2005, 23 Pages.
Bond, Langhorne, “GNSS Sole Means of Navigation and the Future Mix of Navigation Systems in ATC”, In Proceedings of the Conference on ICAO/CANSO, May 18, 2005, 5 Pages.
Brogan, Michael, “Enhancing Digital Rights Management Using the Family Domain”, In Proceedings of the 4th Winona Computer Science Undergraduate Research Symposium, Apr. 20, 2004, pp. 22-28.
Brown, et al., “GPS Tuner from Megalith”, Retrieved From <<https://web.archive.org/web/20080603075542/http://www.clieuk.co.uk/gpstuner.shtml>>, Jun. 19, 2008, 9 Pages.
Coatta, et al., “A Data Synchronization Service for Ad Hoc Groups”, In Proceedings of the IEEE Conference on Wireless Communications and Networking, vol. 1, Mar. 2004, pp. 483-488.
Denham, et al., “Getting from Point A to Point B: A Review of Two GPS Systems”, In AFB Access World, vol. 5, Issue 6, Nov. 2004, 10 Pages.
Egenhofer, et al., “Beyond Desktop GIS”, Retrieved From <<https://pdfs.semanticscholar.org/b814/61ee9b5df495cc0b1f7950eefd437be9acf9.pdf>>, Jan. 29, 2009, 3 Pages.
Hariharan, et al., “Web-Enhanced GPS”, In Proceedings of the 1st International Conference on Location- and Context-Awareness, May 12, 2005, 10 Pages.
Iwasaki, et al., “Azim: Direction Based Service using Azimuth Based Position Estimation”, In Proceedings of the 24th International Conference on Distributed Computing Systems, Mar. 26, 2004, 10 Pages.
Jaques, Robert, “Vendors Plug-in to Connected Navigation”, Retrieved From <<https://web.archive.org/web/20080423044425/http://www.vnunet.com/vnunet/news/2214407/vendors-plug-connected>>, Apr. 16, 2008, 2 Pages.
Jenabi, et al., “Finteraction - Finger Interaction with Mobile Phone”, In Proceedings of the Workshop of the Future Mobile Experiences, Oct. 19, 2008, 4 Pages.
Juszczyk, et al., “Web Service Discovery, Replication, and Synchronization in Ad-Hoc Networks”, In Proceedings of the First International Conference on Availability, Reliability and Security, Apr. 20, 2006, 8 Pages.
Kim, et al., “Efficient and Dynamic Location-based Event Service for Mobile Computing Environments”, In Proceedings of the Fifth International Conference on Computational Science and Applications, Aug. 26, 2007, 7 Pages.
Kratz, et al., “Gesture Recognition Using Motion Estimation on Mobile Phones”, In Proceedings of the 3rd International Workshop on Pervasive Mobile Interaction Devices, Dec. 2007, 5 Pages.
Kwok, et al., “A License Management Model to Support B2C and C2C Music Sharing”, In International Journal of Information Technology & Decision Making, vol. 1, Issue 3, Sep. 2002, 2 Pages.
Liaw, Kim Poh, “Verizon Wireless Releases VZ Navigator Version 4”, Retrieved From <<https://web.archive.org/web/20080510175354/http://www.slashphone.com/verizon-wireless-releases-vz-navigator-version-4-09438>>, May 9, 2008, 6 Pages.
Liu, et al., “A License-Sharing Scheme in Digital Rights Management”, In Cooperative Research Centres-Smart Internet Technology, Jul. 2004, 13 Pages.
Marsh, George, “Sole Source Dead: Long Live Loran?”, Retrieved From <<http://www.aviationtoday.com/av/issue/feature/920.html>>, Jun. 1, 2004, 4 Pages.
Marshall, Chris, “Geotagging with GPS Capture and Process”, Retrieved From <<http://www.ece.utah.edu/~ccharles/clinic/Geotate_CP_White_Paper.pdf>>, Sep. 19, 2008, 25 Pages.
Mircea, et al., “CellID Positioning Method for Virtual Tour Guides Travel Services”, In Proceedings of the Second Edition of the International Conference on Electronics, Computers and Artificial Intelligence, Jun. 29, 2007, 6 Pages.
Mitchell, Christopher, “Use GPS and Web Maps for Location-Aware Apps”, Retrieved From <<http://msdn.microsoft.com/en-us/magazine/2009.01.wm6gps(printer).aspx>>, Mar. 20, 2009, 6 Pages.
Pashtan, et al., “Personal Service Areas for Mobile Web Applications”, In IEEE Internet Computing, vol. 8, Issue 6, Nov. 15, 2004, 7 Pages.
Rashid, et al., “Implementing Location Based Information/Advertising for Existing Mobile Phone Users in Indoor/Urban Environments”, In Proceedings of the International Conference on Mobile Business, Jul. 11, 2005, 7 Pages.
Reti, et al., “DiMaS: Distributing Multimedia on Peer-to-Peer File Sharing Networks”, In Proceedings of the 12th Annual ACM International Conference on Multimedia, Oct. 10, 2004, 2 Pages.
Robinson, et al., “Point-to-GeoBlog: Gestures and Sensors to Support User Generated Content Creation”, In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Sep. 2, 2008, pp. 197-206.
Rossmuller, Nic, “Digital SLR GPS System”, Retrieved From <<http://www.letsgodigital.org/en/13416/sir_camera_gps_system/>>, Mar. 11, 2007, 3 Pages.
Sagiraju, et al., “A Novel Advertising Application Using GPS and GIS”, Retrieved From <<https://web.archive.org/web/20080808135520/http://www.gisdevelopment.net/application/Miscellaneous/mi08_67.htm>>, Mar. 24, 2009, 5 Pages.
Simon, et al., “Towards Orientation-Aware Location Based Mobile Services”, In Lecture Notes in Geoinformation and Cartography, 2007, 8 Pages.
Solyman, Aymen A., “IbnBatota - Technology for a Mobile Map Application”, Retrieved From <<http://www.directionsmag.com/article.php?article_id=807&trv=1>>, Retrieved Date: Mar. 17, 2009, 6 Pages.
Sonntag, Daniel, “Context-Sensitive Multimodal Mobile Interfaces”, In Proceedings of the 9th International Conference on Human Computer Interaction with Mobile Devices and Services, Sep. 9, 2007, pp. 142-148.
Stewart, et al., “Accessible Contextual Information for Urban Orientation”, In Proceedings of the 10th International Conference on Ubiquitous Computing, Sep. 21, 2008, 4 Pages.
Stojanovic, et al., “Modeling and Querying Mobile Objects in Location-Based Services”, In Journal of Facta Universitatis, vol. 18, 2003, 22 Pages.
Trusca, Sorin, “Sanoodi Releases SMap, a Free GPS Route-Recording Mobile Application”, Retrieved From <<http://news.softpedia.com/news/Sanoodi-Releases-SMap-a-Free-GPS-Route-Recording-Mobile-Application-96626.shtml>>, Oct. 28, 2008, 2 Pages.
Weider, et al., “LDAP Multi-Master Replication Protocol”, Network Working Group Internet-Draft, Mar. 20, 1997, 11 Pages.
Werbach, Kevin, “Location-Based Computing: Wherever You Go, There You Are”, In Esther Dyson's Monthly Report, vol. 18, Issue 6, Jun. 28, 2000, 32 Pages.
Related Publications:

Number | Date | Country
---|---|---
20170269703 A1 | Sep 2017 | US
Provisional Applications:

Number | Date | Country
---|---|---
61074415 | Jun 2008 | US
61074590 | Jun 2008 | US
Continuations:

Relation | Number | Date | Country
---|---|---|---
Parent | 14453312 | Aug 2014 | US
Child | 15617248 | | US
Parent | 13908737 | Jun 2013 | US
Child | 14453312 | | US
Parent | 12437857 | May 2009 | US
Child | 13908737 | | US