Inference pipeline system and method

Information

  • Patent Grant
  • Patent Number
    12,143,884
  • Date Filed
    Monday, July 31, 2017
  • Date Issued
    Tuesday, November 12, 2024
Abstract
A system to infer place data is disclosed that receives location data collected on a user's mobile electronic device, recognizes when, where, and for how long the user makes stops, generates possible places visited, and predicts the likelihood that the user visited each of those places.
Description
BACKGROUND

There are a variety of existing technologies which track and monitor location data. One example is a Global Positioning System (GPS) which captures location information at regular intervals from earth-orbiting satellites. Another example is a radio frequency identification (RFID) system which identifies and tracks the location of assets and inventory by affixing a small microchip or tag to an object or person being tracked. Tracking of individuals, devices and goods may be performed using WiFi (IEEE 802.11), cellular wireless (2G, 3G, 4G, etc.) and other WLAN, WAN and other wireless communications technologies.


Additional technologies exist which use geographical positioning to provide information or entertainment services based on a user's location. In one example, an individual uses a mobile device to identify the nearest ATM or restaurant based on his or her current location. Another example is the delivery of targeted advertising or promotions to individuals who are near a particular eating or retail establishment.


In existing systems, received information, such as user data and place data, is noisy. User location data can be noisy due to poor GPS reception, poor Wi-Fi reception, or weak cell phone signals. Similarly, mobile electronic devices can lack certain types of sensors or have low-quality sensor readings. In the same way, the absence of a comprehensive database of places with sufficient coverage and accurate location information causes place data to be noisy as well.


The need exists for a method that utilizes location data to accurately identify the location of people, objects, goods, etc., as well as provide additional benefits. Overall, the examples herein of some prior or related systems and their associated limitations are intended to be illustrative and not exclusive. Other limitations of existing or prior systems will become apparent to those skilled in the art upon reading the following Detailed Description.





BRIEF DESCRIPTION OF THE DRAWINGS

The details of one or more implementations are set forth in the accompanying drawings and the description below. Further features of the invention, its nature and various advantages, will be apparent from the following detailed description and drawings, and from the claims.


Examples of a system and method for a data collection system are illustrated in the figures. The examples and figures are illustrative rather than limiting.



FIG. 1 depicts an example of an inference pipeline system and an environment in which one embodiment of the inference pipeline system can operate.



FIG. 2 depicts a high-level view of the workflow of the inference pipeline.



FIG. 3 illustrates an example path of a user and the subsequent data that can be derived and analyzed by the inference pipeline system.



FIG. 4 depicts a block diagram of example components in an embodiment of the analytics server of the inference pipeline system.



FIGS. 5A and 5B depict example inputs and outputs to a clustering algorithm.



FIG. 6 depicts a location trace based method of detecting movement.



FIG. 7 depicts an example presentation of a personal location profile.





DETAILED DESCRIPTION

An inference pipeline system and method which incorporates validated location data into inference models is described herein. Given a user's information collected from a mobile electronic device, the inference pipeline recognizes whether the user visited a place and, if so, estimates the probability that the user was at that place and how much time the user spent there. It also produces user location profiles, which include information about familiar routes and places.


In some cases, the inference pipeline systems and methods are part of a larger platform for identifying and monitoring a user's location. For example, the inference pipeline system can be coupled to a data collection system which collects and validates location data from a mobile device. Collected user information includes location data such as latitude, longitude, or altitude determinations, sensor data from, for example, compass/bearing data, accelerometer or gyroscope measurements, and other information that can be used to help identify a user's location and activity. Additional details of the data collection system can be found in U.S. patent application Ser. No. 13/405,182.


A place includes any physical establishment such as a restaurant, a park, a grocery store, or a gas station. Places can share the same name. For example, a Starbucks café in one block and a Starbucks café in a different block are separate places. Places can also share the same address. For example, a book store and the coffee shop inside are separate places. Each place can have attributes which include street address, category, hours of operation, customer reviews, popularity, and other information.


In one embodiment, the inference pipeline recognizes when a user visits a place based on location and sensor data. As an example, the inference pipeline system recognizes when a user makes a stop. Next, the place where the user has stopped can be predicted by searching various data sources, combining signals such as place attributes, collecting data from a mobile electronic device, harvesting user demographic and user profile information, monitoring external factors such as season, weather, and events, and using an inference model to generate the probabilities of a user visiting a place.


In another embodiment, the inference pipeline combines various signals to rank all the possible places a user could be visiting. In another embodiment, the inference pipeline estimates the probability of a user visiting a place and the time user has spent at a place.


Various examples of the invention will now be described. The following description provides certain specific details for a thorough understanding and enabling description of these examples. One skilled in the relevant technology will understand, however, that the invention may be practiced without many of these details. Likewise, one skilled in the relevant technology will also understand that the invention may include many other obvious features not described in detail herein. Additionally, some well-known structures or functions may not be shown or described in detail below, to avoid unnecessarily obscuring the relevant descriptions of the various examples.


The terminology used below is to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of the invention. Indeed, certain terms may even be emphasized below; however, any terminology intended to be interpreted in any restricted manner will be overtly and specifically defined as such in this Detailed Description section.



FIG. 1 and the following discussion provide a brief, general description of a representative environment 100 in which an inference pipeline system 120 can operate. A user device 102 is shown which moves from one location to another. As an example, user device 102 moves from a location A 104 to location B 106 to location C 108. The user device 102 may be any suitable device for sending and receiving communications and may represent various electronic systems, such as personal computers, laptop computers, tablet computers, mobile phones, mobile gaming devices, or the like. Those skilled in the relevant art will appreciate that aspects of the invention can be practiced with other communications, data processing, or computer system configurations, including: Internet appliances, hand-held devices [including personal digital assistants (PDAs)], wearable computers, all manner of cellular or mobile phones [including Voice over IP (VoIP) phones], dumb terminals, media players, gaming devices, multi-processor systems, microprocessor-based or programmable consumer electronics, and the like.


As the user device 102 changes locations, the inference pipeline system 120 receives location information through a communication network 110. Network 110 is capable of providing wireless communications using any suitable short-range or long-range communications protocol (e.g., a wireless communications infrastructure including communications towers and telecommunications servers). In other embodiments, network 110 may support Wi-Fi (e.g., 802.11 protocol), Bluetooth, high-frequency systems (e.g., 2G/3G/4G, 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, or other relatively localized wireless communication protocol, or any combination thereof. As such, any suitable circuitry, device, and/or system operative to create a communications network may be used to create network 110. In some embodiments, network 110 supports protocols used by wireless and cellular phones. Such protocols may include, for example, GSM, GSM plus EDGE, CDMA, quad-band, and other cellular protocols. Network 110 also supports long range communication protocols (e.g., Wi-Fi) and protocols for placing and receiving calls using VoIP or LAN.


As will be described in additional detail herein, the inference pipeline system 120 comprises an analytics server 122 coupled to a database 124. Indeed, the terms “system,” “platform,” “server,” “host,” “infrastructure,” and the like are generally used interchangeably herein, and may refer to any computing device or system or any data processor.


1. INPUT/OUTPUT

This section describes inputs and outputs of the inference pipeline.


1.1 Input


The input to the inference pipeline is a sequence of location and/or sensor readings that have been logged by the mobile electronic device. For example, the data may come from GPS, Wi-Fi networks, cell phone triangulation, sensor networks, other indoor or outdoor positioning technologies, sensors in the device itself or embedded in a user's body or belongings, and geo-tagged contents such as photos and text.


For example, for location data from GPS, each location reading includes a time stamp, location source, latitude, longitude, altitude, accuracy estimation, bearing and speed. Each sensor reading includes a time stamp, type of sensor, and values.
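
A reading of this kind maps naturally onto a small record type. The following is a minimal sketch; the class and field names are illustrative rather than taken from the patent:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class LocationReading:
    """One logged location fix, with the fields described above."""
    timestamp: float    # Unix epoch seconds
    source: str         # e.g., "gps", "wifi", "cell"
    latitude: float     # degrees
    longitude: float    # degrees
    altitude: float     # meters
    accuracy: float     # estimated error in meters (higher value = less accurate)
    bearing: float      # degrees clockwise from north
    speed: float        # meters per second


@dataclass
class SensorReading:
    """One logged sensor sample."""
    timestamp: float
    sensor_type: str            # e.g., "accelerometer", "gyroscope"
    values: Tuple[float, ...]   # sensor-specific values
```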


The frequency of location and sensor readings depends on how the user is tracked (e.g., continuously-tracked, session-based).


1.2 User Data Acquisition


Data may be acquired from the user through various methods. In one embodiment, there are two user data acquisition methods. The first is by continuously tracking users who have a tracking application installed and running at all times. For these users, locations are logged with a low frequency, such as once per minute, to conserve battery life.


The second method is session-based, whereby users are indirectly tracked through third parties. When to start and end tracking is controlled by the third-party application or device. When a tracking session begins, user locations are logged with a high frequency to compensate for potentially short usage time. As an example, Table 1 provides example input data to the inference pipeline, such as location readings.


Table 1 shows accuracy values which are sometimes available from location providers. For example, a device with the Android operating system produces accuracy estimations in meters for GPS, WiFi, and cell-tower triangulated locations. For GPS, accuracy estimations can be within 50 meters while cell-phone tower triangulations have accuracy estimations within 1500 meters. In the example shown in Table 1, the higher the accuracy value, the less accurate the reading.













TABLE 1

Time                            Latitude     Longitude    Altitude   Accuracy (m)
Fri, 22 Jul. 2011 01:39:18 GMT  −28.167926   153.528904   48.599998  19
Fri, 22 Jul. 2011 01:39:19 GMT  −28.167920   153.528890   45.700001  17
Fri, 22 Jul. 2011 01:39:20 GMT  −28.167922   153.528866   47.700001  15

1.3 Output


The output of the inference pipeline is a list of places that the user is predicted to have visited. Each predicted place includes the place name, place address, start time of the visit, and end time of the visit. The inference pipeline system includes maintaining a place database, as will be discussed herein. Thus, each prediction also has an identifier for the place entry in the database so that other information about the place is accessible. As an example, Table 2 provides example output data from the inference pipeline, such as place predictions.











TABLE 2

From                           To                             Place
Mon, 7 Nov. 2011 19:18:32 GMT  Mon, 7 Nov. 2011 20:02:21 GMT  Valley Curls
Mon, 7 Nov. 2011 20:17:24 GMT  Mon, 7 Nov. 2011 20:24:11 GMT  Mesquite City Hall

2. WORKFLOW


FIG. 2 is a high-level view 200 of the workflow of the inference pipeline. The pipeline takes raw location and sensor readings as input and generates probabilities that a user has visited a place.


For each data acquisition mode (i.e., continuously-tracked, and session-based), different techniques are used to predict the location of the user. This section focuses on the first type, continuously tracked users. The other type, session users, will be discussed later.


First the location readings, ordered by time stamp, are passed to a temporal clustering algorithm that produces a list of location clusters. Each cluster consists of a number of location readings that are chronologically continuous and geographically close to each other. The existence of a cluster indicates that the user was relatively stationary during a certain period of time. For example, if a user stopped by a Shell gas station from 8:30 AM to 8:40 AM, drove to a Starbucks coffee shop by 9:00 AM, and left the coffee shop at 9:20 AM, ideally two clusters should be generated from the location readings in this time period. The first cluster is made up of a number of location readings between 8:30 AM and 8:40 AM, and those locations should be very close to the actual location of the gas station. The second cluster is made up of a number of location readings between 9:00 AM and 9:20 AM, and those locations should be very close to the coffee shop. Any location readings between 8:40 AM and 9:00 AM are not used for inference. Each cluster has a centroid that the system computes by combining the locations of this cluster. A cluster can be further segmented into a number of sub-clusters, each with a centroid that is called a sub-centroid.


After a cluster is identified from a user's location readings, the place database is queried for places near the cluster's centroid. This search uses a large radius in the hope of mitigating the noise in location data and covering all the candidate places where the user may be located. A feature generation process examines each candidate place and extracts features that characterize the place. The inference model takes the features of each candidate place and generates the probabilities of each candidate being the correct place.


To tune this inference model, a “ground truth,” or process to confirm or more accurately determine place location, is created that includes multiple mappings from location readings to places. A machine learning module uses this ground truth as training and testing data set to fine tune the model.


3. PROCESS


FIG. 4 depicts a block diagram of example components or modules in an embodiment of the analytics server 122 of the inference pipeline. As shown in FIG. 4, the analytics server 122 of the inference pipeline can include, but is not limited to, a clustering and analysis component 202, a filtering component 204, a movement classifier component 206, a segmentation component 208, a merging component 210, and a stop classifier component 212. (Use of the term “system” herein may refer to some or all of the elements of FIG. 4, or other aspects of the inference pipeline system 120.) The following describes details of each individual component.


The functional units described in this specification may be called components or modules, in order to more particularly emphasize their implementation independence. The components/modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.


Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.


3.1 Clustering


The clustering and analysis module 202 takes raw location data and sensor data as input and detects when a user has visited a place, and how a user transitions from one place to another place. The clustering module may include three basic processes. First, raw location data passes through a filter to remove spiky location readings. Second, the stop and movement classifier identifies user movement, and segmentation splits the location sequence into segments during which the user is believed to be stationary or stopped. Third, neighboring segments that are at the same location are merged to further reduce the effect of location noise.


3.1.1 Filtering


The filtering component 204 may filter two types of location readings: location readings with low accuracies and noisy location readings.


The first type, location readings with low accuracies, can include location data from cell tower triangulation, which may be filtered out. Any other location data with estimated accuracy worse than a threshold can be filtered out too. This accuracy estimation is reported by the mobile electronic device. As described above, accuracy estimations are measured in meters and a reasonable threshold would be 50 meters.


The second type, noisy location readings, can be locations without explicitly low accuracy estimations, but with specific attributes (e.g., unreliable, erratic, etc.). To capture spiky locations, a window size may be used to measure the deviation of a location reading. A predetermined number of location readings immediately preceding the location in question, and a predetermined number of location readings immediately after, are used to compute the average location of a neighborhood (e.g., two readings before and after). If the distance between the location in question and the neighborhood average is greater than a threshold, the location in question is removed. In this case, the threshold can be different from the threshold for accuracy estimations and is used to prevent spiky location readings that are reported to be highly accurate. As an example, a WiFi location may have a high accuracy value (e.g., a low number in meters), but in fact be pinpointing a place in a different country.
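
A minimal sketch of these two filtering rules follows, reusing the LocationReading record sketched earlier. The 50-meter accuracy threshold and two-reading neighborhood follow the examples above; the spike distance threshold is an assumed value:

```python
from math import asin, cos, radians, sin, sqrt


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))


def filter_readings(readings, max_accuracy_m=50, window=2, spike_threshold_m=1000):
    """Drop low-accuracy readings, then drop spiky readings that deviate from
    the average of their neighborhood (`window` readings before and after
    the reading in question)."""
    accurate = [r for r in readings if r.accuracy <= max_accuracy_m]
    kept = []
    for i, reading in enumerate(accurate):
        neighbors = accurate[max(0, i - window):i] + accurate[i + 1:i + 1 + window]
        if not neighbors:
            kept.append(reading)
            continue
        avg_lat = sum(n.latitude for n in neighbors) / len(neighbors)
        avg_lon = sum(n.longitude for n in neighbors) / len(neighbors)
        if haversine_m(reading.latitude, reading.longitude, avg_lat, avg_lon) <= spike_threshold_m:
            kept.append(reading)
    return kept
```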



FIGS. 5A and 5B illustrate example inputs and outputs of the clustering 202 module or process. FIG. 5A shows the various location readings while FIG. 5B shows the final results of the clustering module 202. The x-axis is time, while the y-axes are latitude and longitude. The input is a sequence of location readings that can be very noisy. After running the clustering module, three clusters are identified, shown as the straight lines in FIG. 5B, with centroids at coordinates <47.613, −122.333>, <47.616, −122.355>, and <47.611, −122.331> respectively. Although FIGS. 5A and 5B only show latitude and longitude, altitude and sensor data may also be taken into account in the clustering module.


3.1.2 Movement Classifier


The movement classifier component 206 can detect movement in at least two ways under the clustering module 202.


3.1.2.1 Location Trace Based


Under location trace based methods, the movement classifier component 206 uses a sliding time window that moves along a location sequence. For each window, the movement classifier component determines whether the user moved during this period. If the movement classifier component determines that the user has likely moved, the classifier splits the location sequence at the point in the window where the user speed is greatest.



FIG. 6 illustrates this location trace based method 600 of detecting movement. As illustrated in FIG. 6, each block in the five rows represents a location reading. The second and third windows 620 and 630 are classified as moving and the other windows 640, 650, 660 are classified as not moving. As a result, the location sequence is broken into two segments as shown by row 670.


The sliding window in the example of FIG. 6 has a size of six readings. The movement classifier uses this window of six readings to determine if the user was moving. First the diameter of the bounding box of this window is computed using the minimum and maximum of latitude and longitude.

$\text{diameter} = D\big(\langle \text{latitude}_{\min}, \text{longitude}_{\min} \rangle, \langle \text{latitude}_{\max}, \text{longitude}_{\max} \rangle\big)$

where D is the great-circle distance between two locations on Earth.


Additionally, the speed of this window, defined below, is also computed






$\text{speed} = \dfrac{\text{diameter}}{\text{duration}}$






where duration is the length of the sliding window.


If the diameter is greater than a threshold, such as 100 meters, or if the speed is greater than a threshold (such as one meter per second), the classifier outputs true (moving); otherwise it outputs false (not moving).
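
This window classification rule can be sketched as follows, reusing the haversine helper from the filtering example; the default thresholds are the 100-meter and one-meter-per-second examples given above:

```python
def window_is_moving(window, max_diameter_m=100, max_speed_mps=1.0):
    """Classify a sliding window of location readings as moving or not,
    using the bounding-box diameter and window speed defined above."""
    lats = [r.latitude for r in window]
    lons = [r.longitude for r in window]
    diameter = haversine_m(min(lats), min(lons), max(lats), max(lons))
    duration = window[-1].timestamp - window[0].timestamp
    speed = diameter / duration if duration > 0 else 0.0
    return diameter > max_diameter_m or speed > max_speed_mps
```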


3.1.2.2 Sensor Based Method


The other method uses sensors in the tracking device to detect if a user is moving. When accelerometer data is available, the movement classifier uses a similar sliding window and applies a Fourier transform to the sensor data (e.g., accelerometer data) to calculate the base frequency of user movement in each time window. Depending on this frequency, a user's movement is classified as moving or stationary.
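
A minimal sketch of this sensor-based classifier, assuming NumPy and a fixed sampling rate; the 0.5 Hz decision threshold is an assumed value, while the walking and jogging figures come from the discussion of base frequency later in this document:

```python
import numpy as np


def base_frequency_hz(accel_xyz, sample_rate_hz):
    """Dominant non-DC frequency of the acceleration magnitude in one window.

    accel_xyz: array of shape (N, 3) holding x/y/z accelerometer samples.
    """
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    magnitude = magnitude - magnitude.mean()       # remove gravity/DC component
    spectrum = np.abs(np.fft.rfft(magnitude))
    freqs = np.fft.rfftfreq(len(magnitude), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin


def is_moving_by_sensor(accel_xyz, sample_rate_hz, min_freq_hz=0.5):
    """Classify a window as moving when the base frequency is high enough
    (e.g., walking at roughly 1.5 Hz, jogging at roughly 3 Hz)."""
    return base_frequency_hz(accel_xyz, sample_rate_hz) >= min_freq_hz
```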


3.1.3 Splitting Location Sequence


If the movement classifier classifies a window as moving, the classifier identifies a point of maximal speed and splits the location sequence at that point.


If the window is classified as not moving, the sliding window shifts one location reading towards the future, and the process repeats.


3.1.4 Segmentation and Centroid Computation


After the sliding window has covered all the locations of a user, the segmentation component 208 divides the whole location sequence into one or more segments. Each segment is called a location cluster. The segmentation component then calculates the centroid of a location cluster by combining the locations in this cluster.







$\text{latitude} = \dfrac{1}{N} \sum_{i=1}^{N} w_i \, \text{latitude}_i \qquad\qquad \text{longitude} = \dfrac{1}{N} \sum_{i=1}^{N} w_i \, \text{longitude}_i$










where $w_i$ is the weight for the $i$th location. The weight depends on the source of the location. As an example, GPS locations have higher weights than other locations. In another implementation, locations from an internet search engine may be weighted higher than those from a restaurant recommendation service.
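
A small sketch of this centroid computation follows. It uses a standard weighted mean, which matches the formula above whenever the weights average to one; the per-source weights are assumed values:

```python
def cluster_centroid(readings, source_weight=None):
    """Weighted centroid of a cluster's location readings."""
    source_weight = source_weight or {"gps": 1.5, "wifi": 1.0, "cell": 0.5}
    weights = [source_weight.get(r.source, 1.0) for r in readings]
    total = sum(weights)
    latitude = sum(w * r.latitude for w, r in zip(weights, readings)) / total
    longitude = sum(w * r.longitude for w, r in zip(weights, readings)) / total
    return latitude, longitude
```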


3.1.5 Merging


Because of noisy location data, it is possible for an actual cluster to be broken into several clusters. An extra step is taken, or a process implemented, to merge consecutive clusters if their centroids are close. The merging component 210 handles such merging. One condition, implemented by the merging component, for two neighboring clusters to merge is that the distance between their centroids is below a threshold and the start time of the later cluster is no later than a few minutes after the end of the earlier cluster.
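
A sketch of this merging pass follows, assuming cluster objects that expose a centroid, start and end times, and a merge operation (an interface invented here for illustration); the distance and time-gap thresholds are assumed values:

```python
def merge_clusters(clusters, max_distance_m=100, max_gap_s=300):
    """Merge consecutive clusters with nearby centroids and a short time gap.

    Each cluster is assumed to expose `centroid` as a (lat, lon) pair,
    `start`/`end` timestamps in seconds, and a `merge(other)` method
    returning the combined cluster.
    """
    if not clusters:
        return []
    merged = [clusters[0]]
    for cluster in clusters[1:]:
        prev = merged[-1]
        close = haversine_m(*prev.centroid, *cluster.centroid) <= max_distance_m
        soon = (cluster.start - prev.end) <= max_gap_s
        if close and soon:
            merged[-1] = prev.merge(cluster)   # combine into one stop
        else:
            merged.append(cluster)
    return merged
```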


3.1.6 Stop Classifier


The stop classifier component 212 examines the duration of each cluster, and decides if the cluster represents a user staying at a place. If the duration does not fall into a predefined range, the cluster is filtered out and not considered in subsequent stages. An example predefined range would be any time between two minutes and three hours.


3.2 Candidate Search


After clusters are generated under the clustering module 202, the system sends search requests or queries to a place database to retrieve places close to each cluster's centroid.


The place database contains information for all the places that have been collected by the analytics server. The information includes name, address, location, category, business hours, reviews, and parcel geometries. The place database is geospatially indexed so that a search by location can be quickly performed. All places that are within a radius from the cluster centroid are returned as candidate places.
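
As a naive sketch, the same search can be expressed as a linear scan over an in-memory list of place records; a production system would instead use the geospatial index described above, and the radius here is an assumed value:

```python
def candidate_places(place_db, centroid, radius_m=200):
    """Return all places within `radius_m` of the cluster centroid.

    `place_db` is assumed to be an iterable of place records that expose
    `latitude` and `longitude` attributes.
    """
    lat, lon = centroid
    return [place for place in place_db
            if haversine_m(lat, lon, place.latitude, place.longitude) <= radius_m]
```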


Place data is generated from multiple sources, including but not limited to geographic information databases, government databases, review Web sites, social networks, and online maps. Places from multiple sources may be matched and merged, creating a richer representation of each physical place. By combining information from multiple sources, one can minimize noise and improve data quality. For example, averaging locations of the same place from all sources, and giving higher weights to more reliable sources, can improve the estimation of the actual location of the place. The actual location of each place is provided by a place location ground truth (described later).


3.3 Feature Generation


For each cluster, the analytics server of the inference pipeline takes all of the place candidates found in the place database and ranks them by the probability that a typical user, or this particular user, would visit. Before ranking, features are extracted from each candidate.


Three types of features can be extracted from each place candidate:

    • 1. Cluster features are only dependent on the cluster itself, not any place candidates. A cluster feature has the same value for all the candidates of a cluster.
    • 2. Place features are place-specific features. Each candidate may have different place features depending on its attributes, such as location and category.
    • 3. User features depend on the user's profile, such as demographic attributes. Similar to cluster features, user features are the same for all the candidates of a cluster.


3.3.1 Cluster Features


Cluster features describe properties of the cluster and can include:

    • Duration of the cluster
    • Noisiness of location readings in the cluster, measured as the average accuracy
    • Number of location readings in the cluster
    • Radius of the cluster
    • Probability of visiting different categories of places given the timestamp for the cluster or “cluster time”
    • Density of places near the cluster centroid
    • Zoning information of the cluster centroid
    • Season, weather, or temperature associated with the cluster time


3.3.2 Place Features


Place features describe properties of a place candidate. Place features include:

    • Distance to cluster centroid
    • Number of sources: The place database collects place information from a number of reverse geo-coding sources. This feature counts how many data sources provide the place.
    • Low-quality place location: Some data sources have low-accuracy place locations. This feature examines whether a place candidate comes only from individual or lower-accuracy data sources.
    • Popularity: Popularity of a place can be calculated from the number of times users check in to a place, the number of people who connected to a place on social networks, the number of comments or reviews, Wi-Fi/Bluetooth signal visibility to users' mobile electronic devices, noise level, transaction volumes, sales numbers, and user visits captured by other sensors.
    • Category: If the category taxonomy is a multi-level hierarchy, more than one feature can be used.
    • Single-user place: This feature indicates whether only a small number of users have ever checked in to or otherwise visited this place.
    • Review count: For places that come from data sources with reviews, this feature is the total number of reviews.
    • Time-category match: This feature measures the probability of a user visiting a place of a particular category, given the day or time of day. For example, at 7 AM, a user is more likely to visit a coffee shop than a night club.
    • Business hours: Probability the place is open at the time of the visit.
    • Parcel distance: The distance between the centroid of the parcel and the centroid of the cluster.
    • Cluster in place parcel: This feature is true when the centroid of the cluster falls into the parcel of the place.


3.3.3 User Features


User features are generated from the user profile, including demographics, mobile device information, user location history, and other information that helps predict where a user is visiting. Past place visits are also useful in predicting future visits. For example, if high-confidence inferences have been made in the past that a user visited a certain neighborhood grocery store every week, the user is more likely to go to this place if he/she hasn't been there for seven days.

    • User demographics, including age, gender, income level, ethnicity, education, marital status, and number of children
    • Distance to home location
    • Distance to work location
    • Commute patterns
    • Device features including manufacturer, model, service provider, and sensor availability.
    • Frequently visited places in the past.


3.4 Inference Engine


All the features are combined as a composite feature vector and normalized to the range of zero to one. For each cluster, the inference engine examines the feature vectors of each candidate place and ranks them by the probability of being the actual place.


3.4.1 Ranking Candidates


The inference engine may use a mapping from a feature vector F to a ranking score s, such as via a linear model:











$s = W^{T} F - b = \sum_{i=1}^{M} W_i F_i - b$










where W is the weight vector, b is the bias value, and M is the number of features. The higher this ranking score, the more likely the candidate is the actual place. The weight vector is determined by the training process.


3.4.2 Probability Estimation


The inference engine then turns the score into a probability






$\phi_j = e^{s_j} \qquad\qquad P(\text{Place}_j) = \dfrac{\phi_j}{\sum_{k=1}^{n} \phi_k}$








With this probability, one can either take the candidate with the highest probability as the inference output, or use a list of top candidates, along with their probabilities, as the result.
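
The ranking and probability steps of Sections 3.4.1 and 3.4.2 can be sketched together as follows, assuming NumPy. Subtracting the maximum score before exponentiating leaves the probabilities unchanged but avoids numerical overflow:

```python
import numpy as np


def rank_candidates(feature_matrix, weights, bias):
    """Score and rank the candidate places of one cluster.

    feature_matrix: (M, L) array, one normalized feature vector per candidate.
    weights: (L,) weight vector W; bias: scalar b.
    """
    scores = feature_matrix @ weights - bias   # s = W^T F - b for each candidate
    phi = np.exp(scores - scores.max())        # stabilized exp
    probs = phi / phi.sum()                    # softmax over candidates
    order = np.argsort(-probs)                 # order[0] is the top candidate
    return scores, probs, order
```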


3.5 Ground Truth Generation


In order to verify inference models, the system collects ground truth data. This data is considered to be accurate observations that link a user with a place at an instance of time. This data is used to train and validate the inference models downstream. Some sources of reference data include, but are not limited to, the following:


3.5.1 Place Location


Ground truth for place locations is collected by having human labelers associate places with locations on mapping tools. Each location is then converted to a latitude-longitude coordinate and stored in a database. The labelers may be tasked with identifying the place in question on an aerial map and writing down the coordinates of the center of the building and the geometry of the building. If there is more than one place sharing the same building, the best estimation of the correct corner is taken.


3.5.2 Place Confirmation


This is a place where a user checks in or confirms as the actual place associated with his or her location. If a place is not listed, the user has the ability to manually add the name of the place. A place confirmation can come from data sources such as a voluntary check-in, or from sensor measurements of temperature, sound, light, proximity sensors, or near field communication (NFC) chipsets, etc.


Place confirmations are determined with a separate tool that collects ground truth data. An example of such a tool is a mobile application that suggests a list of places to check in to, given the user's location. Every time a user checks in to a place, this is logged as a check-in observation event. When the check-in is registered, the check-in observation event is associated with a latitude, longitude, and timestamp. This source of data serves as a direct validation of the inference model and is used to train the model.


3.5.3 Offline Check-In


Users can check in to a place via third-party location-based services. When a user registers a check-in, the analytics server requests check-in information logged by the user within that service. These check-ins are transmitted as offline check-ins. This source of data serves as reference data for the inference pipeline. By combining mobile device data and user check-ins created using third-party software, the analytics server can map a user's device data to places.


3.5.4 Place Queries


A check-in service includes social networking websites that allow users to “check in” to a physical place and share their location with other members. Examples of check-in services include Facebook, Foursquare, Google Latitude, and Gowalla. Check-in services provide a list of nearby places and an opportunity for the user to confirm a place s/he has visited. The place data can then be stored with the check-in service provider in order to support personal analytics, sharing on social networks, earning incentives, public and private recognition, general measurement, broadcasting, and history. Every time a user queries check-in services for nearby places, the data is packaged into an observation that is sent to the analytics servers. A place query contains a time stamp, user location, and places received from each check-in service. This data helps to preserve and recover the original user check-in experience for verification purposes.


3.5.5 Place Survey Answers


This is obtained from sources that include, but are not limited to, responses to survey questions relating to places visited by a user. Survey questions are generated by the inference pipeline and serve as a form of feedback mechanism. In one implementation, survey questions are generated automatically. After the inference pipeline predicts a user's visits within a certain time window, an algorithm evaluates the uncertainty of each inferred visit and picks highly uncertain instances to generate survey questions. Questions are generated using various rules to improve the accuracy of an inference, and to eliminate results that are highly ambiguous or have a low confidence. In turn, uncertain inferences are used to generate survey questions because such inferences tend to have low accuracies; thus, any validation of these inferences helps improve the inference engine. An example presentation of survey questions is:


“Were you at one of the following places around 4 PM yesterday?”


A. Starbucks Coffee


B. McDonald's


C. Wal-Mart


D. None of the above


Answers to survey questions are used as training data to fine-tune the inference model and improve the chance that the correct place is ranked as the top candidate. User data from mobile devices and survey question answers are paired up to create training samples, which are input to a machine learning algorithm to adjust the inference model.


To detect fraud and estimate data quality, random places are included in survey questions. In one implementation, a place that is not in the vicinity of the user is added to the choices. If this place is picked, the survey question is marked as invalid. Statistics associated with fraudulent answers are used to determine the reliability of survey answers. Also, the order of answers is randomized to remove position bias.


3.5.6 Activity Journal


By combining a digital journal with a mobile device, a user can be associated with a verified location in order to produce validated place data. The device registers observations that include location data, and the digital journal enables an individual to log actual place and time data. The digital journal, which includes place and time data, is combined with device observations using the time stamp as the “join point.” A join point can use time to correlate digital log data with device observation data. This joining of two sources of data generates reference data for the inference pipeline. For example, when a user enters information into a digital log indicating that he was at a coffee shop at 9 AM, and there is observation data collected at 9 AM, a join point is created for 9 AM that associates the device observations with a location of the user (i.e., the coffee shop).


Journals are tabular data in any form, such as a spreadsheet or digitized paper form. Processing journals can either be a manual or automatic process. Journals can be turned into any format, such as text files or a database, as long as it can be processed by the inference pipeline.


Reference data is considered highly accurate and is designed to generate pinpoint accuracy in terms of location, place, time, and activity.

















TABLE 3

Place Name     Mall/Area  Date          Walk-in Time  Leave Time  Entry/Exit  Path to Place                   Actions at Place     Notes/Issues
General Store  Main St    Nov. 8, 2011  11:52 am      12:01 pm    South Door  Took sidewalk from parking lot  Stood near entrance  NA


Table 3 provides example fields in the digital log completed by a journal-taker. The journal-taker completes each of the columns, confirming the place name of a visit and recording any optional notes or issues. The time stamp can be automatically filled in or manually entered. The values in these fields assign information to device observation data. The level of precision and accuracy is considered high based on the details collected by the digital logs. This data is considered high value across reference data sources.


3.5.7 Third-Party Reference Data


Data can be bulk imported into the system from third-party feeds that include questionnaires, travel logs, credit card purchase logs, mobile wallet transactions, point-of-sale (POS) transactions, bar code scanning, security card or electronic access data (e.g., access to buildings, garages, etc. using security cards/PIN codes), etc. The inference pipeline infrastructure provides interfaces to bulk import data that can augment the database with reference data.


3.5.7.1 User-Generated Contents


Validation data can be generated by associating data collected on mobile devices with user-generated contents on social networks, review sites, and other types of information sharing platforms. For example, a social network post about an event directly matches a place. Similarly, a user's contacts on a social network may also be used as a source of ground truth to validate location data.


3.5.7.2 Purchase Records


Purchase information identifies a place where a purchase occurred, and a time when the purchase occurred. These values combined with location data act as a form of validation and can be treated as ground truth for the inference pipeline.


Without the location data, store activity can still be used in the inference pipeline to identify that the user has visited a place, and the frequency of purchases can determine frequency of visits. This store activity acts as a behavioral roadmap of past activity as it can be indicative of future behavior in the form of places visited.


3.5.7.3 Network Data


Network data includes a network of devices that register location data at various levels of precision. Sources of network data include mobile carriers, network service providers, and device service providers; the data and metadata may be collected as a byproduct of core services.


As an example, a mobile carrier provides cell phone service to millions of customers via its cellular network. As a byproduct of providing core cell service, this network registers location data because each mobile device is connected to an access point. Aggregating this data across all customers creates a density map of carrier activity associated with location, which the data collection system defines as network data. This network data can act as a proxy for baseline location activity for millions of customers. In addition, the network data may help to identify popular sites or more-trafficked areas so that more accurate predictions of a user's location can be made.


The network data acting as baseline location activity enables the data collection system to identify location biases and build models to normalize against those biases. As more sources of network data are incorporated, the models become more robust and diversified, as a single source may not accurately represent a population in a given geographic area.


Network information related to WiFi base stations, such as signal strengths and frequently-used WiFi networks, may be correlated with place confirmation data in order to build a history of check-in activity or empirical data. In this implementation, WiFi network features can be used as markers to identify a place. WiFi network features include aspects such as network name (SSID), signal strength, MAC address, IP address, and security settings. Features generated from a device that is associated with a WiFi network include the names of available WiFi networks, the WiFi networks a device is connected to, and the names of WiFi networks that are visible to the device, all of which provide additional information about a person's location in relation to a place. These features act as markers in the inference model to determine a place.


As an example, if a device is consistently connected to a certain WiFi network over the span of a few days from 7 pm to 7 am, this WiFi network may become a marker that signifies the home of the device owner. This concept of a continuous, recurring, and/or persistent WiFi network connection applies outside of the home as well, including work and school.


As another example, networks named after networking devices, such as “linksys” or “NETGEAR,” are more likely to be home networks than business networks. Also, a connection to a WiFi network is a strong indication that a person is located at or near a place that is frequently or commonly visited. This same logic is applied to identify businesses, where the network name might identify an actual business name providing the service (e.g., McDonald's WiFi, Starbucks, Barnes and Noble Customer WiFi, Best Western Inn Wireless).


4. IMPROVING INFERENCE MODEL WITH MACHINE LEARNING

As noted above, the inference engine may be validated with ground truth data that maps user locations to places.


4.1 Training Data


Training data contains a list of location records or labels that can be generated by users who carry the mobile electronic device with them in normal daily life. Each label consists of a timestamp, a reference to a place in the database, and the user ID. The users' mobile electronic devices are also tracking their locations. By looking up the user ID of each label, corresponding location readings can be retrieved. Using the retrieved location readings close to the label timestamp, one can go through the inference pipeline, starting from clustering, to predict the location of a user. The machine learning process uses the predicted place and actual place to tune the model.


Training data can come from either direct validation data, such as user check-in and survey questions, or indirect validation data, such as credit card transaction history. Indirect validation data can be used in the same way as direct validation data, to improve ranking of place candidates, and estimate the accuracy of an inference. Data collected on a user's mobile devices is fed to the model, and then the output of the inference model(s) is compared against the input data. Any disagreement between model output and desired result is sent back to the model to adjust parameters.


Each training sample consists of a feature vector and a label for every candidate place of a cluster. For a cluster with M candidate places, the label is represented as a probability

$P_y(x_j), \quad j = 1, \dots, M$

where $x_j$ is the $j$th candidate. If $x_{j^*}$ is the correct place,








$P_y(x_j) = \begin{cases} 1, & j = j^* \\ 0, & j \neq j^* \end{cases}$


4.2 Loss Function


The loss function is defined as the cross entropy. If the output of the model is represented as

$P_{z(f_W)}(x_j)$

where $f$ is the model and $W$ is the model parameter (for example, a weight vector). The loss function is












$L\big(y, z(f_W)\big) = -\sum_{j=1}^{M} P_y(x_j) \log\!\big(P_{z(f_W)}(x_j)\big) = -\log\!\big(P_{z(f_W)}(x_{j^*})\big)$











4.3 Optimization


Gradient descent is used to optimize the model.


The gradient of the loss function is











$\Delta W = \dfrac{\partial L\big(y, z(f_W)\big)}{\partial W} = -\dfrac{\partial \log\!\big(P_{z(f_W)}(x_{j^*})\big)}{\partial W} = -\dfrac{\partial f_W(x_{j^*})}{\partial W} + \dfrac{\sum_{k=1}^{M} \exp\!\big(f_W(x_k)\big) \dfrac{\partial f_W(x_k)}{\partial W}}{\sum_{k=1}^{M} \exp\!\big(f_W(x_k)\big)}$





Since $\phi_k = \exp(s_k)$ and $s_k = f_W(x_k)$,







$\Delta W = -\dfrac{\partial s_{j^*}}{\partial W} + \dfrac{1}{\sum_{k=1}^{M} \phi_k} \sum_{k=1}^{M} \left( \phi_k \dfrac{\partial s_k}{\partial W} \right)$





More specifically







$\Delta W_l = -F_{j^*,\,l} + \dfrac{1}{\sum_{k=1}^{M} \phi_k} \sum_{k=1}^{M} \left( \phi_k F_{k,l} \right)$



where $F_{k,l}^{(i)}$ is feature $l$ of place candidate $k$ in instance $i$.


The weight vector is updated with this gradient:

$W_l^{\,t+1} = W_l^{\,t} + \eta \, \Delta W_l^{\,t}$

where $\eta$ is the learning rate. Learning stops when the results converge.
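
A sketch of one training update for a single cluster follows, combining the softmax probabilities with the gradient derived above. It uses the conventional descent sign (subtracting the gradient of the loss), and the learning rate is an assumed value:

```python
import numpy as np


def training_step(F, j_star, W, lr=0.1):
    """One gradient-descent update of the weight vector for one cluster.

    F: (M, L) feature matrix of the M candidate places.
    j_star: index of the correct place from the ground truth label.
    """
    scores = F @ W                        # s_k = f_W(x_k); bias omitted for brevity
    phi = np.exp(scores - scores.max())   # stabilized exp; probabilities unchanged
    probs = phi / phi.sum()
    # dL/dW_l = -F[j*, l] + sum_k probs[k] * F[k, l], per the derivation above
    grad = -F[j_star] + probs @ F
    return W - lr * grad
```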


4.4 Accuracy Estimation


Some clusters are harder for the inference engine to predict than others because of a number of factors, such as location accuracy, the number of candidate places close to the cluster, and the availability of certain place attributes. A classifier is trained to estimate the accuracy of each inference result. This estimated accuracy is used when aggregating inference results. For example, one can apply different weights based on the estimated probability of each inference being correct.


The accuracy estimation classifier takes several features as input including the average precision of locations in the cluster, the number of place candidates nearby, whether the location is inside an indoor mall, and whether the location is close to high rise buildings or in a more remote area.


This classifier is trained against validation data. This classifier does not impact the results of the inference engine, but it assigns a confidence value to each inference instance. This value is used to weight the impact of each inference when aggregating results. Inferences with lower confidence values are weighted lower than those with higher confidence values so that the overall result is more accurate.


5. INFERENCE WITH SESSION USERS

As discussed earlier, there are two modes of data acquisition or location input: continuously-tracked and session-based. What has been described previously includes a continuously-tracked mode of data acquisition. Session-based data acquisition mode is different in at least three aspects.

    • 1. User locations for session users are segmented, whereas user locations for continuously-tracked users are continuous. Location tracking is controlled by third-party applications or devices which integrate, e.g., an analytics agent. User locations are only available when a tracking session is turned on. The result is that the analytics server only receives location data during the sessions, which are typically short.
    • 2. User location data is collected at a higher frequency. Because battery life is less of a concern when tracking is limited to short periods of time, locations are logged with higher frequency, typically once per second, giving the model a better estimation of user movement.
    • 3. All sensors are turned on when tracking starts, including accelerometer, orientation, and gyroscope when available.


The difference between session-based location input and continuously tracked location input is addressed by modifying the inference pipeline in the following ways.


5.1 Movement Detection


When user speed is higher than a threshold, the system may skip the session and not run the inference engine. Speed is computed by dividing the diameter of the location bounding box by the length of the session.


5.2 Skip Clustering


When the user is detected to be stationary, the system may skip clustering. The entire session is considered as a cluster. The centroid of the cluster is computed the same way as a cluster in continuous tracking mode.


5.3 Additional Features


Because session users tend to have denser location readings and sensor readings, several features are added to the inference model to take advantage of additional data:

    • Application type data identifies the kind of application/device that is tracking the user
    • Orientation data of the mobile electronic device
    • Base frequency of the mobile electronic device's movement. A base frequency is computed from a Fourier transform of the accelerometer readings, and can be generated from historical data, such as using samples gathered over a day, week, month, or other timeframe. The base frequency can measure how fast a body oscillates. For example, a jogger's base frequency can be 3 Hz, while a walker's base frequency can be 1.5 Hz.


6. AGGREGATION OF INFERENCE RESULTS

After collecting information from a user's mobile device for a period of time, and running the inference engine on the data, the results can be aggregated in different ways to serve different purposes.


6.1 Personal Location Profile


By aggregating a single user's place visit data, one can generate a user location profile with the user's consent which presents useful information about the user's behavior and visit patterns. Such information may include statistics about the user's daily commute, frequently visited places, vehicle's fuel economy, and other personal location analytics data. From this data, one can propose alternative routes, offer incentives to nearby businesses, and suggest similar places frequented by the user in the same area.



FIG. 7 illustrates an example presentation of a personal location profile, which shows places visited on a weekly basis, and suggests related places. While not shown in FIG. 7, timing information or timestamps are associated with inference results to infer activities such as lunch, activities after work hours, commuting, etc. As an example, week one 710 shows a visit to McDonald's and Starbucks during lunch for a certain amount of time (e.g., an hour and a half), spending an hour each at Safeway, QFC, and Costco for groceries after work, and finally refilling on gas at a Shell before returning home. Week 2's 720 visits to Subway, Albertsons, Macy's, Costco, Shell, Starbucks, and Taco Bell and Week 3's 730 visits to Whole Foods Market, Taco Bell, Safeway, Rite Aid and Costco can also reveal similar information such as visit duration, time of visit, venue address, proximity, etc. As a result of the user's personal location profile, the system can arrive at some suggested places 740 including Wal-Mart, Trader Joe's, and Chevron.


6.2 User Location Analytics


By aggregating data of places visited across multiple users, one can produce user location analytics for third-party mobile application developers. Such an analytics report may display, among other things, how often users consume a mobile application at home, at work, and during their commute; the geographical distribution of the user base; and user activities before, during, and after consuming the application.


7. CONCLUSION

Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussion of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.


Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.


Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words “herein,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.


The above Detailed Description of examples of the invention is not intended to be exhaustive or to limit the invention to the precise form disclosed above. While specific examples for the invention are described above for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative implementations may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternative or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed or implemented in parallel, or may be performed at different times. Further any specific numbers noted herein are only examples: alternative implementations may employ differing values or ranges.


The teachings of the invention provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various examples described above can be combined to provide further implementations of the invention. Some alternative implementations of the invention may include not only additional elements to those implementations noted above, but also may include fewer elements.


Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the invention can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations of the invention.


These and other changes can be made to the invention in light of the above Detailed Description. While the above description describes certain examples of the invention, and describes the best mode contemplated, no matter how detailed the above appears in text, the invention can be practiced in many ways. Details of the system may vary considerably in specific implementations, while still being encompassed by the invention disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the invention should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the invention with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the invention to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the invention encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the invention under the claims.


To reduce the number of claims, certain aspects of the invention are presented below in certain claim forms, but the applicant contemplates the various aspects of the invention in any number of claim forms. For example, while only one aspect of the invention is recited as a means-plus-function claim under 35 U.S.C. sec. 112, sixth paragraph, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. § 112, ¶6 will begin with the words “means for”, but use of the term “for” in any other context is not intended to invoke treatment under 35 U.S.C. § 112, ¶6.) Accordingly, the applicant reserves the right to pursue such additional claim forms after filing this application, in either this application or in a continuing application.

Claims
  • 1. A method for inferring a location of a user, the method comprising:
    receiving raw location data from a mobile device associated with a user;
    filtering the raw location data to generate filtered location data, wherein filtering the raw location data comprises:
      filtering low accuracy location data from the raw location data, wherein the low accuracy location data is identified based upon an accuracy estimate associated with one or more portions of the raw location data with respect to an indicated location, the accuracy estimate reported by the mobile device, and wherein the low accuracy location data is filtered based upon a first threshold; and
      filtering noisy location data from the raw location data, wherein noisy location data is location data estimated as accurate but has a distance greater than a second threshold from an average location, the average location determined based upon a predetermined number of location readings preceding the noisy location data and a predetermined number of location readings after the noisy location data;
    identifying, from the filtered location data, a stationary location of the mobile device, wherein the stationary location is associated with the mobile device being stationary for longer than a threshold time;
    determining multiple candidate place names that are within a predetermined radius of the stationary location, wherein determining multiple candidate place names that are within a predetermined radius of the stationary location includes querying a place name database that includes place information and corresponding geo-location data;
    obtaining attributes of the filtered location data and attributes of the multiple candidate place names;
    extracting, for each candidate place name, place features based upon attributes for an individual place name;
    extracting cluster features describing cluster properties, wherein the cluster features comprise radius of the cluster, number of location readings within the cluster, and noisiness of location readings in the cluster, and wherein the cluster features are applicable to the multiple candidate place names located within the cluster;
    extracting user features from a profile associated with the user;
    generating a composite feature vector based upon the place features describing a candidate place, the cluster features, and the user features describing the user, the user features comprising demographic attributes; and
    inferring, based upon the composite feature vector, one of the multiple candidate place names as a place name for the stationary location based on a comparison of the attributes.
  • 2. The method of claim 1, wherein the raw location data comprises a series of location data, the method further comprising:
    classifying a sliding window of N contiguous location data over the series of location data as moving or not moving, wherein N is an integer number of location data;
    segmenting the series of location data into two or more location clusters based on whether the sliding window is classified as moving or not moving; and
    identifying a place name for each of the two or more location clusters.
  • 3. The method of claim 1, further comprising:
    receiving reference data associated with the user that links the user to a candidate place at an instance of time;
    wherein inferring one of the multiple candidate place names as a place name for the stationary location includes inferring one of the multiple candidate place names based on the received reference data.
  • 4. The method of claim 1, wherein the raw location data includes latitude and longitude coordinate data and an associated time at which the data was measured.
  • 5. The method of claim 1, wherein the raw location data is received based on continuous tracking of the user.
  • 6. The method of claim 1, wherein the raw location data is received during session-based tracking of the user.
  • 7. A system for inferring a location of a user, the system comprising:
    at least one processor; and
    memory storing executable instructions that, when executed by the at least one processor, perform a method comprising:
      receiving multiple location readings, wherein each location reading is associated with a time and estimated accuracy;
      filtering the multiple location readings to generate filtered location readings, wherein filtering the multiple location readings comprises:
        filtering low accuracy location data from the multiple location readings, wherein the low accuracy location data is identified based upon an accuracy estimate associated with one or more portions of the multiple location readings, the accuracy estimate reported by the mobile device, and wherein the low accuracy location data is filtered based upon a first threshold; and
        filtering noisy location readings from the multiple location readings, wherein a noisy location reading is a location reading estimated as accurate but has a distance greater than a second threshold from an average location, the average location determined based upon a predetermined number of location readings preceding the noisy location data and a predetermined number of location readings after the noisy location data;
      determining a stop for the mobile device based on the filtered location readings, wherein the stop includes a stop time and a stop location;
      predicting a plurality of possible places associated with the determined stop location at the determined stop time;
      extracting an attribute of each of the possible places, wherein the attribute includes at least one of a place category or hours of operation;
      extracting, for each possible place, place features based upon attributes for an individual place name;
      extracting cluster features describing cluster properties, wherein the cluster features comprise radius of the cluster, number of location readings within the cluster, and noisiness of location readings in the cluster, and wherein the cluster features are applicable to the plurality of possible places located within the cluster;
      extracting user features from a profile associated with the user;
      generating a composite feature vector based upon the place features describing a candidate place, the cluster features, and the user features describing the user, the user features comprising demographic attributes; and
      inferring, based upon the composite feature vector, one of the multiple candidate place names as a place name for the stationary location based on a comparison of the attributes.
  • 8. The system of claim 7, wherein the stop is determined by determining the time and location of the mobile device by clustering the filtered location readings into location clusters.
  • 9. The system of claim 7, wherein the stop is determined by determining the time and location of the mobile device by clustering the filtered location readings into location clusters; and wherein the method further comprises merging neighboring location clusters when a distance between centroids of the neighboring location clusters is below a centroid threshold.
  • 10. The system of claim 7, wherein the stop is determined by determining the time and location of the mobile device by clustering the filtered location readings into location clusters; and wherein the method further comprises querying a place database to retrieve the possible places that are within a radius from a centroid of each of the location clusters, wherein the place database stores locations and attributes corresponding to places.
  • 11. The system of claim 7, wherein the method further comprises calculating a probability that the user is located at each of the possible places, wherein the probability is based on a distance between reference data and each of the possible places and the extracted attribute.
  • 12. The system of claim 11, wherein the reference data links the user to a candidate place at an instance of time, and wherein the reference data is derived from at least one of: place check-in, internet search activity, social networking site activity, geotagged image, email, phone call, calendar appointment or network activity.
  • 13. The system of claim 7, wherein the stop is determined by determining the time and location of the mobile device by clustering the filtered location readings into location clusters; wherein the method further comprises computing a centroid of each of the location clusters as a weighted combination of location readings associated with the location cluster, wherein a weight of a location reading depends on the source of the location reading.
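
Although the claims define the pipeline in prose, the two-stage location filter recited in claims 1 and 7 can be illustrated concretely. The following Python sketch is illustrative only: the reading schema (dictionaries with lat, lon, and a device-reported accuracy), the haversine helper, and the threshold values are assumptions for exposition, not limitations drawn from the claims.

from math import radians, sin, cos, asin, sqrt
from statistics import mean

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in meters.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def filter_readings(readings, max_accuracy_m=100.0, max_jump_m=250.0, window=3):
    # Stage 1: drop readings whose device-reported accuracy estimate exceeds
    # the first threshold ("low accuracy location data" in claims 1 and 7).
    accurate = [r for r in readings if r["accuracy"] <= max_accuracy_m]
    # Stage 2: drop readings estimated as accurate but lying farther than the
    # second threshold from the average of the readings immediately before
    # and after them ("noisy location data").
    kept = []
    for i, r in enumerate(accurate):
        neighbors = accurate[max(0, i - window):i] + accurate[i + 1:i + 1 + window]
        if not neighbors:
            kept.append(r)
            continue
        avg_lat = mean(n["lat"] for n in neighbors)
        avg_lon = mean(n["lon"] for n in neighbors)
        if haversine_m(r["lat"], r["lon"], avg_lat, avg_lon) <= max_jump_m:
            kept.append(r)
    return kept

Note that the first stage acts on the device's own accuracy estimate, while the second acts on distance from an average of the surrounding readings, matching the two distinct filters the claims recite.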
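Claim 2 recites classifying a sliding window of N contiguous location readings as moving or not moving, without fixing the classifier. One plausible criterion, sketched below under that assumption, is average speed across the window; the value of N, the speed threshold, and the reading schema (with a timestamp t in seconds) are illustrative, and the sketch reuses haversine_m from the preceding example.

def classify_windows(readings, n=5, max_speed_mps=1.0):
    # Label each window of N contiguous readings as "moving" or "stationary"
    # using average speed over the window (one plausible classifier; the
    # claim does not fix the criterion).
    labels = []
    for i in range(len(readings) - n + 1):
        w = readings[i:i + n]
        dist = sum(haversine_m(a["lat"], a["lon"], b["lat"], b["lon"])
                   for a, b in zip(w, w[1:]))
        dt = w[-1]["t"] - w[0]["t"]
        labels.append("moving" if dt > 0 and dist / dt > max_speed_mps else "stationary")
    return labels

def segment_stationary(readings, labels, n=5):
    # Collect runs of readings covered only by "stationary" windows into
    # location clusters, as in the segmenting step of claim 2.
    clusters, current = [], []
    for i, r in enumerate(readings):
        covering = labels[max(0, i - n + 1):min(i + 1, len(labels))]
        if covering and all(l == "stationary" for l in covering):
            current.append(r)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters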
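Claims 9 and 13 recite merging neighboring location clusters whose centroids are sufficiently close and computing each centroid as a weighted combination of readings, with a reading's weight depending on its source. A minimal sketch follows, again reusing haversine_m; the source weights and the merge threshold are placeholders, since the claims specify the mechanism but not the values.

def weighted_centroid(cluster, source_weights=None):
    # Centroid as a weighted combination of the cluster's readings, with the
    # weight of a reading depending on its source (claim 13); the weights
    # here are illustrative placeholders.
    w = source_weights or {"gps": 1.0, "wifi": 0.6, "cell": 0.3}
    weights = [w.get(r.get("source"), 0.5) for r in cluster]
    total = sum(weights)
    lat = sum(r["lat"] * wt for r, wt in zip(cluster, weights)) / total
    lon = sum(r["lon"] * wt for r, wt in zip(cluster, weights)) / total
    return lat, lon

def merge_neighbors(clusters, centroid_threshold_m=50.0):
    # Merge neighboring clusters whose centroids fall within the centroid
    # threshold (claim 9).
    merged = []
    for c in clusters:
        if merged:
            d = haversine_m(*weighted_centroid(merged[-1]), *weighted_centroid(c))
            if d < centroid_threshold_m:
                merged[-1] = merged[-1] + list(c)
                continue
        merged.append(list(c))
    return merged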
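Finally, claims 1 and 7 recite a composite feature vector built from place features, cluster features, and user demographic features, which drives the final inference over the candidate places. The sketch below assumes an illustrative field schema and an externally trained scoring function; the claims name the feature groups but neither a particular schema nor a particular model.

def composite_feature_vector(place, cluster, user):
    # Concatenate place, cluster, and user features into one vector
    # (claims 1 and 7); the field names are hypothetical.
    return [
        place["distance_to_centroid_m"],   # place feature
        place["category_prior"],           # place feature
        cluster["radius_m"],               # cluster feature
        cluster["num_readings"],           # cluster feature
        cluster["noisiness"],              # cluster feature
        user["age"],                       # user (demographic) feature
        user["gender_code"],               # user (demographic) feature
    ]

def infer_place(candidates, cluster, user, score_fn):
    # score_fn stands in for any trained model's scoring function (the claims
    # do not name one); the highest-scoring candidate is inferred as the
    # place name for the stop.
    return max(candidates,
               key=lambda p: score_fn(composite_feature_vector(p, cluster, user)))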
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 15/018,538, filed Feb. 8, 2016, now U.S. Pat. No. 9,723,450, granted Aug. 1, 2017, which is a continuation of U.S. patent application Ser. No. 14/300,102, filed Jun. 9, 2014, now U.S. Pat. No. 9,256,832, granted Feb. 9, 2016, which is a continuation of U.S. patent application Ser. No. 13/405,190, filed Feb. 24, 2012, now U.S. Pat. No. 8,768,876, granted Jul. 1, 2014, all of which are incorporated by reference in their entirety.

US Referenced Citations (637)
Number Name Date Kind
666223 Shedlock Jan 1901 A
4581634 Williams Apr 1986 A
4975690 Torres Dec 1990 A
5072412 Henderson, Jr. et al. Dec 1991 A
5493692 Theimer et al. Feb 1996 A
5713073 Warsta Jan 1998 A
5754939 Herz et al. May 1998 A
5855008 Goldhaber et al. Dec 1998 A
5883639 Walton et al. Mar 1999 A
5999932 Paul Dec 1999 A
6012098 Bayeh et al. Jan 2000 A
6014090 Rosen et al. Jan 2000 A
6029141 Bezos et al. Feb 2000 A
6038295 Mattes Mar 2000 A
6049711 Yehezkel et al. Apr 2000 A
6154764 Nitta et al. Nov 2000 A
6167435 Druckenmiller et al. Dec 2000 A
6204840 Petelycky et al. Mar 2001 B1
6205432 Gabbard et al. Mar 2001 B1
6216141 Straub et al. Apr 2001 B1
6285381 Sawano et al. Sep 2001 B1
6285987 Roth et al. Sep 2001 B1
6310694 Okimoto et al. Oct 2001 B1
6317789 Rakavy et al. Nov 2001 B1
6334149 Davis, Jr. et al. Dec 2001 B1
6349203 Asaoka et al. Feb 2002 B1
6353170 Eyzaguirre et al. Mar 2002 B1
6446004 Cao et al. Sep 2002 B1
6449485 Anzil Sep 2002 B1
6449657 Stanbach et al. Sep 2002 B2
6456852 Bar et al. Sep 2002 B2
6484196 Maurille Nov 2002 B1
6487601 Hubacher et al. Nov 2002 B1
6523008 Avrunin Feb 2003 B1
6542749 Tanaka et al. Apr 2003 B2
6549768 Fraccaroli Apr 2003 B1
6618593 Drutman et al. Sep 2003 B1
6622174 Ukita et al. Sep 2003 B1
6631463 Floyd et al. Oct 2003 B1
6636247 Hamzy et al. Oct 2003 B1
6636855 Holloway et al. Oct 2003 B2
6643684 Malkin et al. Nov 2003 B1
6658095 Yoakum et al. Dec 2003 B1
6665531 Soderbacka et al. Dec 2003 B1
6668173 Greene Dec 2003 B2
6684238 Dutta Jan 2004 B1
6684257 Camut et al. Jan 2004 B1
6698020 Zigmond et al. Feb 2004 B1
6700506 Winkler Mar 2004 B1
6720860 Narayanaswami Apr 2004 B1
6724403 Santoro et al. Apr 2004 B1
6757713 Ogilvie et al. Jun 2004 B1
6832222 Zimowski Dec 2004 B1
6834195 Brandenberg et al. Dec 2004 B2
6836792 Chen Dec 2004 B1
6898626 Ohashi May 2005 B2
6959324 Kubik et al. Oct 2005 B1
6970088 Kovach Nov 2005 B2
6970907 Ullmann et al. Nov 2005 B1
6980909 Root et al. Dec 2005 B2
6981040 Konig et al. Dec 2005 B1
7020494 Spriestersbach et al. Mar 2006 B2
7027124 Foote et al. Apr 2006 B2
7072963 Anderson et al. Jul 2006 B2
7085571 Kalhan et al. Aug 2006 B2
7110744 Freeny, Jr. Sep 2006 B2
7124164 Chemtob Oct 2006 B1
7149893 Leonard et al. Dec 2006 B1
7173651 Knowles Feb 2007 B1
7188143 Szeto Mar 2007 B2
7203380 Chiu et al. Apr 2007 B2
7206568 Sudit Apr 2007 B2
7227937 Yoakum et al. Jun 2007 B1
7237002 Estrada et al. Jun 2007 B1
7240089 Boudreau Jul 2007 B2
7269426 Kokkonen et al. Sep 2007 B2
7280658 Amini et al. Oct 2007 B2
7315823 Brondrup Jan 2008 B2
7349768 Bruce et al. Mar 2008 B2
7356564 Hartselle et al. Apr 2008 B2
7394345 Ehlinger et al. Jul 2008 B1
7411493 Smith Aug 2008 B2
7423580 Markhovsky et al. Sep 2008 B2
7454442 Cobleigh et al. Nov 2008 B2
7508419 Toyama et al. Mar 2009 B2
7512649 Faybishenko et al. Mar 2009 B2
7519670 Hagale et al. Apr 2009 B2
7535890 Rojas May 2009 B2
7546554 Chiu et al. Jun 2009 B2
7607096 Oreizy et al. Oct 2009 B2
7639943 Kalajan Dec 2009 B1
7650231 Gadler Jan 2010 B2
7668537 DeVries Feb 2010 B2
7770137 Forbes et al. Aug 2010 B2
7778973 Choi Aug 2010 B2
7779444 Glad Aug 2010 B2
7787886 Markhovsky et al. Aug 2010 B2
7796946 Eisenbach Sep 2010 B2
7801954 Cadiz et al. Sep 2010 B2
7856360 Kramer et al. Dec 2010 B2
7966658 Singh et al. Jun 2011 B2
8001204 Burtner et al. Aug 2011 B2
8010685 Singh et al. Aug 2011 B2
8032586 Challenger et al. Oct 2011 B2
8082255 Carlson, Jr. et al. Dec 2011 B1
8090351 Klein Jan 2012 B2
8098904 Ioffe et al. Jan 2012 B2
8099109 Altman et al. Jan 2012 B2
8112716 Kobayashi Feb 2012 B2
8131597 Hudetz Mar 2012 B2
8135166 Rhoads et al. Mar 2012 B2
8136028 Loeb et al. Mar 2012 B1
8146001 Reese Mar 2012 B1
8161115 Yamamoto Apr 2012 B2
8161417 Lee Apr 2012 B1
8195203 Tseng Jun 2012 B1
8199747 Rojas et al. Jun 2012 B2
8200247 Starenky et al. Jun 2012 B1
8208943 Petersen Jun 2012 B2
8214443 Hamburg Jul 2012 B2
8219032 Behzad Jul 2012 B2
8220034 Hahn et al. Jul 2012 B2
8229458 Busch Jul 2012 B2
8234350 Gu et al. Jul 2012 B1
8276092 Narayanan et al. Sep 2012 B1
8279319 Date Oct 2012 B2
8280406 Ziskind et al. Oct 2012 B2
8285199 Hsu et al. Oct 2012 B2
8287380 Nguyen et al. Oct 2012 B2
8296842 Singh et al. Oct 2012 B2
8301159 Hamynen et al. Oct 2012 B2
8306922 Kunal et al. Nov 2012 B1
8312086 Velusamy et al. Nov 2012 B2
8312097 Siegel et al. Nov 2012 B1
8326315 Phillips et al. Dec 2012 B2
8326327 Hymel et al. Dec 2012 B2
8332475 Rosen et al. Dec 2012 B2
8352546 Dollard Jan 2013 B1
8379130 Forutanpour et al. Feb 2013 B2
8385950 Wagner et al. Feb 2013 B1
8391808 Gonikberg et al. Mar 2013 B2
8402097 Szeto Mar 2013 B2
8405773 Hayashi et al. Mar 2013 B2
8418067 Cheng et al. Apr 2013 B2
8423409 Rao Apr 2013 B2
8471914 Sakiyama et al. Jun 2013 B2
8472935 Fujisaki Jun 2013 B1
8509761 Krinsky et al. Aug 2013 B2
8510383 Hurley et al. Aug 2013 B2
8527345 Rothschild et al. Sep 2013 B2
8532577 Behzad et al. Sep 2013 B2
8554627 Svendsen et al. Oct 2013 B2
8560612 Kilmer et al. Oct 2013 B2
8588942 Agrawal Nov 2013 B2
8594680 Ledlie et al. Nov 2013 B2
8613088 Varghese et al. Dec 2013 B2
8613089 Holloway et al. Dec 2013 B1
8660358 Bergboer et al. Feb 2014 B1
8660369 Llano et al. Feb 2014 B2
8660793 Ngo et al. Feb 2014 B2
8682350 Altman et al. Mar 2014 B2
8718333 Wolf et al. May 2014 B2
8724622 Rojas May 2014 B2
8732168 Johnson May 2014 B2
8744523 Fan et al. Jun 2014 B2
8745132 Obradovich Jun 2014 B2
8751427 Mysen Jun 2014 B1
8761800 Kuwahara Jun 2014 B2
8768876 Shim Jul 2014 B2
8775972 Spiegel Jul 2014 B2
8788680 Naik Jul 2014 B1
8790187 Walker et al. Jul 2014 B2
8797415 Arnold Aug 2014 B2
8798646 Wang et al. Aug 2014 B1
8856349 Jain et al. Oct 2014 B2
8874677 Rosen et al. Oct 2014 B2
8886227 Schmidt et al. Nov 2014 B2
8909679 Roote et al. Dec 2014 B2
8909725 Sehn Dec 2014 B1
8942953 Yuen et al. Jan 2015 B2
8948700 Behzad et al. Feb 2015 B2
8972357 Shim Mar 2015 B2
8983408 Gonikberg et al. Mar 2015 B2
8995433 Rojas Mar 2015 B2
9015285 Ebsen et al. Apr 2015 B1
9020745 Johnston et al. Apr 2015 B2
9040574 Wang et al. May 2015 B2
9055416 Rosen et al. Jun 2015 B2
9083708 Ramjee et al. Jul 2015 B2
9094137 Sehn et al. Jul 2015 B1
9100806 Rosen et al. Aug 2015 B2
9100807 Rosen et al. Aug 2015 B2
9113301 Spiegel et al. Aug 2015 B1
9119027 Sharon et al. Aug 2015 B2
9123074 Jacobs Sep 2015 B2
9143382 Bhogal et al. Sep 2015 B2
9143681 Ebsen et al. Sep 2015 B1
9152477 Campbell et al. Oct 2015 B1
9191776 Root et al. Nov 2015 B2
9204252 Root Dec 2015 B2
9225897 Sehn et al. Dec 2015 B1
9256832 Shim Feb 2016 B2
9258459 Hartley Feb 2016 B2
9344606 Hartley et al. May 2016 B2
9385983 Sehn Jul 2016 B1
9396354 Murphy et al. Jul 2016 B1
9407712 Sehn Aug 2016 B1
9407816 Sehn Aug 2016 B1
9430783 Sehn Aug 2016 B1
9439041 Parvizi et al. Sep 2016 B2
9443227 Evans et al. Sep 2016 B2
9450907 Pridmore et al. Sep 2016 B2
9459778 Hogeg et al. Oct 2016 B2
9489661 Evans et al. Nov 2016 B2
9491134 Rosen et al. Nov 2016 B2
9532171 Allen et al. Dec 2016 B2
9537811 Allen et al. Jan 2017 B2
9628950 Noeth et al. Apr 2017 B1
9710821 Heath Jul 2017 B2
9723450 Shim Aug 2017 B2
9854219 Sehn Dec 2017 B2
20020047868 Miyazawa Apr 2002 A1
20020078456 Hudson et al. Jun 2002 A1
20020087631 Sharma Jul 2002 A1
20020097257 Miller et al. Jul 2002 A1
20020122659 Mcgrath et al. Sep 2002 A1
20020128047 Gates Sep 2002 A1
20020144154 Tomkow Oct 2002 A1
20030001846 Davis et al. Jan 2003 A1
20030016247 Lai et al. Jan 2003 A1
20030017823 Mager et al. Jan 2003 A1
20030020623 Cao et al. Jan 2003 A1
20030023874 Prokupets et al. Jan 2003 A1
20030037124 Yamaura et al. Feb 2003 A1
20030052925 Daimon et al. Mar 2003 A1
20030101230 Benschoter et al. May 2003 A1
20030110503 Perkes Jun 2003 A1
20030126215 Udell Jul 2003 A1
20030148773 Spriestersbach et al. Aug 2003 A1
20030164856 Prager et al. Sep 2003 A1
20030229607 Zellweger et al. Dec 2003 A1
20040027371 Jaeger Feb 2004 A1
20040064429 Hirstius et al. Apr 2004 A1
20040078367 Anderson et al. Apr 2004 A1
20040111467 Willis Jun 2004 A1
20040116134 Maeda Jun 2004 A1
20040158739 Wakai et al. Aug 2004 A1
20040189465 Capobianco et al. Sep 2004 A1
20040203959 Coombes Oct 2004 A1
20040215625 Svendsen et al. Oct 2004 A1
20040243531 Dean Dec 2004 A1
20040243688 Wugofski Dec 2004 A1
20050021444 Bauer et al. Jan 2005 A1
20050022211 Veselov et al. Jan 2005 A1
20050048989 Jung Mar 2005 A1
20050078804 Yomoda Apr 2005 A1
20050097176 Schatz et al. May 2005 A1
20050102381 Jiang et al. May 2005 A1
20050104976 Currans May 2005 A1
20050114783 Szeto May 2005 A1
20050119936 Buchanan et al. Jun 2005 A1
20050122405 Voss et al. Jun 2005 A1
20050193340 Amburgey et al. Sep 2005 A1
20050193345 Klassen et al. Sep 2005 A1
20050198128 Anderson Sep 2005 A1
20050223066 Buchheit et al. Oct 2005 A1
20050288954 McCarthy et al. Dec 2005 A1
20060026067 Nicholas et al. Feb 2006 A1
20060107297 Toyama et al. May 2006 A1
20060114338 Rothschild Jun 2006 A1
20060119882 Harris et al. Jun 2006 A1
20060242239 Morishima et al. Oct 2006 A1
20060252438 Ansamaa et al. Nov 2006 A1
20060265417 Amato et al. Nov 2006 A1
20060270419 Crowley et al. Nov 2006 A1
20060287878 Wadhwa et al. Dec 2006 A1
20070004426 Pfleging et al. Jan 2007 A1
20070038715 Collins et al. Feb 2007 A1
20070040931 Nishizawa Feb 2007 A1
20070073517 Panje Mar 2007 A1
20070073823 Cohen et al. Mar 2007 A1
20070075898 Markhovsky et al. Apr 2007 A1
20070082707 Flynt et al. Apr 2007 A1
20070136228 Petersen Jun 2007 A1
20070192128 Celestini Aug 2007 A1
20070198340 Lucovsky et al. Aug 2007 A1
20070198495 Buron et al. Aug 2007 A1
20070208751 Cowan et al. Sep 2007 A1
20070210936 Nicholson Sep 2007 A1
20070214180 Crawford Sep 2007 A1
20070214216 Carrer et al. Sep 2007 A1
20070233556 Koningstein Oct 2007 A1
20070233801 Eren et al. Oct 2007 A1
20070233859 Zhao et al. Oct 2007 A1
20070243887 Bandhole et al. Oct 2007 A1
20070244633 Phillips et al. Oct 2007 A1
20070244750 Grannan et al. Oct 2007 A1
20070255456 Funayama Nov 2007 A1
20070281690 Altman et al. Dec 2007 A1
20080022329 Glad Jan 2008 A1
20080025701 Ikeda Jan 2008 A1
20080032703 Krumm et al. Feb 2008 A1
20080033930 Warren Feb 2008 A1
20080043041 Hedenstroem et al. Feb 2008 A2
20080049704 Witteman et al. Feb 2008 A1
20080062141 Chandhri Mar 2008 A1
20080076505 Nguyen et al. Mar 2008 A1
20080092233 Tian et al. Apr 2008 A1
20080094387 Chen Apr 2008 A1
20080104503 Beall et al. May 2008 A1
20080109844 Baldeschwieler et al. May 2008 A1
20080120409 Sun et al. May 2008 A1
20080147730 Lee et al. Jun 2008 A1
20080148150 Mall Jun 2008 A1
20080158230 Sharma et al. Jul 2008 A1
20080168033 Ott et al. Jul 2008 A1
20080168489 Schraga Jul 2008 A1
20080189177 Anderton et al. Aug 2008 A1
20080207176 Brackbill et al. Aug 2008 A1
20080208692 Garaventi et al. Aug 2008 A1
20080214210 Rasanen et al. Sep 2008 A1
20080222545 Lemay Sep 2008 A1
20080255976 Altberg et al. Oct 2008 A1
20080256446 Yamamoto Oct 2008 A1
20080256577 Funaki et al. Oct 2008 A1
20080266421 Takahata et al. Oct 2008 A1
20080270938 Carlson Oct 2008 A1
20080288338 Wiseman et al. Nov 2008 A1
20080306826 Kramer et al. Dec 2008 A1
20080313329 Wang et al. Dec 2008 A1
20080313346 Kujawa et al. Dec 2008 A1
20080318616 Chipalkatti et al. Dec 2008 A1
20090005987 Vengroff Jan 2009 A1
20090006191 Arankalle et al. Jan 2009 A1
20090006565 Velusamy et al. Jan 2009 A1
20090015703 Kim et al. Jan 2009 A1
20090024956 Kobayashi Jan 2009 A1
20090030774 Rothschild et al. Jan 2009 A1
20090030999 Gatzke et al. Jan 2009 A1
20090040324 Nonaka Feb 2009 A1
20090042588 Lottin et al. Feb 2009 A1
20090058822 Chaudhri Mar 2009 A1
20090079846 Chou Mar 2009 A1
20090089558 Bradford et al. Apr 2009 A1
20090089678 Sacco et al. Apr 2009 A1
20090089710 Wood et al. Apr 2009 A1
20090093261 Ziskind et al. Apr 2009 A1
20090132341 Klinger et al. May 2009 A1
20090132453 Hangartner et al. May 2009 A1
20090132665 Thomsen et al. May 2009 A1
20090148045 Lee et al. Jun 2009 A1
20090153492 Popp Jun 2009 A1
20090157450 Athsani et al. Jun 2009 A1
20090157752 Gonzalez Jun 2009 A1
20090160970 Fredlund et al. Jun 2009 A1
20090163182 Gatti et al. Jun 2009 A1
20090177299 Van De Sluis et al. Jul 2009 A1
20090192900 Collison Jul 2009 A1
20090199242 Johnson et al. Aug 2009 A1
20090204354 Davis et al. Aug 2009 A1
20090215469 Fisher et al. Aug 2009 A1
20090232354 Camp, Jr. et al. Sep 2009 A1
20090234815 Boerries et al. Sep 2009 A1
20090239552 Churchill et al. Sep 2009 A1
20090249222 Schmidt et al. Oct 2009 A1
20090249244 Robinson et al. Oct 2009 A1
20090265647 Martin et al. Oct 2009 A1
20090276235 Benezra et al. Nov 2009 A1
20090278738 Gopinath Nov 2009 A1
20090288022 Almstrand et al. Nov 2009 A1
20090291672 Treves et al. Nov 2009 A1
20090292608 Polachek Nov 2009 A1
20090319607 Belz et al. Dec 2009 A1
20090327073 Li Dec 2009 A1
20100041378 Aceves et al. Feb 2010 A1
20100062794 Han Mar 2010 A1
20100082427 Burgener et al. Apr 2010 A1
20100082693 Hugg et al. Apr 2010 A1
20100100568 Papin et al. Apr 2010 A1
20100113065 Narayan et al. May 2010 A1
20100130233 Lansing May 2010 A1
20100131880 Lee et al. May 2010 A1
20100131895 Wohlert May 2010 A1
20100153144 Miller et al. Jun 2010 A1
20100159944 Pascal et al. Jun 2010 A1
20100161658 Hamynen et al. Jun 2010 A1
20100161720 Colligan et al. Jun 2010 A1
20100161831 Haas et al. Jun 2010 A1
20100162149 Sheleheda et al. Jun 2010 A1
20100183280 Beauregard et al. Jul 2010 A1
20100185552 Deluca et al. Jul 2010 A1
20100185665 Horn et al. Jul 2010 A1
20100191631 Weidmann Jul 2010 A1
20100197318 Petersen et al. Aug 2010 A1
20100197319 Petersen et al. Aug 2010 A1
20100198683 Aarabi Aug 2010 A1
20100198694 Muthukrishnan Aug 2010 A1
20100198826 Petersen et al. Aug 2010 A1
20100198828 Petersen et al. Aug 2010 A1
20100198862 Jennings et al. Aug 2010 A1
20100198870 Petersen et al. Aug 2010 A1
20100198917 Petersen et al. Aug 2010 A1
20100201482 Robertson et al. Aug 2010 A1
20100201536 Robertson et al. Aug 2010 A1
20100211425 Govindarajan Aug 2010 A1
20100214436 Kim et al. Aug 2010 A1
20100223128 Dukellis et al. Sep 2010 A1
20100223343 Bosan et al. Sep 2010 A1
20100223346 Dragt Sep 2010 A1
20100250109 Johnston et al. Sep 2010 A1
20100257036 Khojastepour et al. Oct 2010 A1
20100257196 Waters et al. Oct 2010 A1
20100259386 Holley et al. Oct 2010 A1
20100273509 Sweeney et al. Oct 2010 A1
20100281045 Dean Nov 2010 A1
20100306669 Della Pasqua Dec 2010 A1
20110004071 Faiola et al. Jan 2011 A1
20110010205 Richards Jan 2011 A1
20110029512 Folgner et al. Feb 2011 A1
20110040783 Uemichi et al. Feb 2011 A1
20110040804 Peirce et al. Feb 2011 A1
20110050909 Ellenby et al. Mar 2011 A1
20110050915 Wang et al. Mar 2011 A1
20110064388 Brown et al. Mar 2011 A1
20110066743 Hurley et al. Mar 2011 A1
20110076653 Culligan et al. Mar 2011 A1
20110083101 Sharon et al. Apr 2011 A1
20110099046 Weiss et al. Apr 2011 A1
20110099047 Weiss Apr 2011 A1
20110099048 Weiss et al. Apr 2011 A1
20110102630 Rukes May 2011 A1
20110119133 Igelman et al. May 2011 A1
20110137881 Cheng et al. Jun 2011 A1
20110145564 Moshir et al. Jun 2011 A1
20110159890 Fortescue et al. Jun 2011 A1
20110164163 Bilbrey et al. Jul 2011 A1
20110197194 D'Angelo et al. Aug 2011 A1
20110202598 Evans et al. Aug 2011 A1
20110202968 Nurmi Aug 2011 A1
20110211534 Schmidt et al. Sep 2011 A1
20110213845 Logan et al. Sep 2011 A1
20110215903 Yang et al. Sep 2011 A1
20110215966 Kim et al. Sep 2011 A1
20110225048 Nair Sep 2011 A1
20110238763 Shin et al. Sep 2011 A1
20110255736 Thompson et al. Oct 2011 A1
20110273575 Lee Nov 2011 A1
20110282799 Huston Nov 2011 A1
20110283188 Farrenkopf Nov 2011 A1
20110314419 Dunn et al. Dec 2011 A1
20110320373 Lee et al. Dec 2011 A1
20120028659 Whitney et al. Feb 2012 A1
20120033718 Kauffman et al. Feb 2012 A1
20120036015 Sheikh Feb 2012 A1
20120036443 Ohmori et al. Feb 2012 A1
20120054797 Skog et al. Mar 2012 A1
20120059722 Rao Mar 2012 A1
20120062805 Candelore Mar 2012 A1
20120084731 Filman et al. Apr 2012 A1
20120084835 Thomas et al. Apr 2012 A1
20120099800 Llano et al. Apr 2012 A1
20120100869 Liang Apr 2012 A1
20120108293 Law et al. May 2012 A1
20120110096 Smarr et al. May 2012 A1
20120113143 Adhikari et al. May 2012 A1
20120113272 Hata May 2012 A1
20120123830 Svendsen et al. May 2012 A1
20120123871 Svendsen et al. May 2012 A1
20120123875 Svendsen et al. May 2012 A1
20120124126 Alcazar et al. May 2012 A1
20120124176 Curtis et al. May 2012 A1
20120124458 Cruzada May 2012 A1
20120131507 Sparandara et al. May 2012 A1
20120131512 Takeuchi et al. May 2012 A1
20120143760 Abulafia et al. Jun 2012 A1
20120150978 Monaco Jun 2012 A1
20120165100 Lalancette et al. Jun 2012 A1
20120166971 Sachson et al. Jun 2012 A1
20120169855 Oh Jul 2012 A1
20120172062 Altman et al. Jul 2012 A1
20120173991 Roberts et al. Jul 2012 A1
20120176401 Hayward et al. Jul 2012 A1
20120184248 Speede Jul 2012 A1
20120197724 Kendall Aug 2012 A1
20120200743 Blanchflower et al. Aug 2012 A1
20120209924 Evans et al. Aug 2012 A1
20120210244 De Francisco Lopez et al. Aug 2012 A1
20120212632 Mate et al. Aug 2012 A1
20120220264 Kawabata Aug 2012 A1
20120226748 Bosworth et al. Sep 2012 A1
20120233000 Fisher et al. Sep 2012 A1
20120236162 Imamura Sep 2012 A1
20120239761 Linner et al. Sep 2012 A1
20120246004 Book Sep 2012 A1
20120250951 Chen Oct 2012 A1
20120252418 Kandekar et al. Oct 2012 A1
20120254325 Majeti et al. Oct 2012 A1
20120264446 Xie et al. Oct 2012 A1
20120278387 Garcia et al. Nov 2012 A1
20120278692 Shi Nov 2012 A1
20120290637 Perantatos et al. Nov 2012 A1
20120299954 Wada et al. Nov 2012 A1
20120304052 Tanaka et al. Nov 2012 A1
20120304080 Wormald et al. Nov 2012 A1
20120307096 Bray et al. Dec 2012 A1
20120307112 Kunishige et al. Dec 2012 A1
20120319904 Lee et al. Dec 2012 A1
20120323933 He et al. Dec 2012 A1
20120324018 Metcalf et al. Dec 2012 A1
20130006759 Srivastava et al. Jan 2013 A1
20130024757 Doll et al. Jan 2013 A1
20130036364 Johnson Feb 2013 A1
20130045753 Obermeyer et al. Feb 2013 A1
20130050260 Reitan Feb 2013 A1
20130055083 Fino Feb 2013 A1
20130057587 Leonard et al. Mar 2013 A1
20130059607 Herz et al. Mar 2013 A1
20130060690 Oskolkov et al. Mar 2013 A1
20130063369 Malhotra et al. Mar 2013 A1
20130067027 Song et al. Mar 2013 A1
20130071093 Hanks et al. Mar 2013 A1
20130080254 Thramann Mar 2013 A1
20130085790 Palmer et al. Apr 2013 A1
20130086072 Peng et al. Apr 2013 A1
20130090171 Holton et al. Apr 2013 A1
20130095857 Garcia et al. Apr 2013 A1
20130104053 Thornton et al. Apr 2013 A1
20130110885 Brundrett, III May 2013 A1
20130111514 Slavin et al. May 2013 A1
20130128059 Kristensson May 2013 A1
20130129252 Lauper et al. May 2013 A1
20130132477 Bosworth et al. May 2013 A1
20130145286 Feng et al. Jun 2013 A1
20130159110 Rajaram et al. Jun 2013 A1
20130159919 Leydon Jun 2013 A1
20130169822 Zhu et al. Jul 2013 A1
20130173729 Starenky et al. Jul 2013 A1
20130182133 Tanabe Jul 2013 A1
20130185131 Sinha et al. Jul 2013 A1
20130191198 Carlson et al. Jul 2013 A1
20130194301 Robbins et al. Aug 2013 A1
20130198176 Kim Aug 2013 A1
20130218965 Abrol et al. Aug 2013 A1
20130218968 Mcevilly et al. Aug 2013 A1
20130222323 Mckenzie Aug 2013 A1
20130225202 Shim et al. Aug 2013 A1
20130226857 Shim et al. Aug 2013 A1
20130227476 Frey Aug 2013 A1
20130232194 Knapp et al. Sep 2013 A1
20130254227 Shim et al. Sep 2013 A1
20130263031 Oshiro et al. Oct 2013 A1
20130265450 Barnes, Jr. Oct 2013 A1
20130267253 Case et al. Oct 2013 A1
20130275505 Gauglitz et al. Oct 2013 A1
20130290443 Collins et al. Oct 2013 A1
20130304646 De Geer Nov 2013 A1
20130311255 Cummins et al. Nov 2013 A1
20130325964 Berberat Dec 2013 A1
20130344896 Kirmse et al. Dec 2013 A1
20130346869 Asver et al. Dec 2013 A1
20130346877 Borovoy et al. Dec 2013 A1
20140006129 Heath Jan 2014 A1
20140011538 Mulcahy et al. Jan 2014 A1
20140019264 Wachman et al. Jan 2014 A1
20140032682 Prado et al. Jan 2014 A1
20140043204 Basnayake et al. Feb 2014 A1
20140045530 Gordon et al. Feb 2014 A1
20140047016 Rao Feb 2014 A1
20140047045 Baldwin et al. Feb 2014 A1
20140047335 Lewis et al. Feb 2014 A1
20140049652 Moon et al. Feb 2014 A1
20140052485 Shidfar Feb 2014 A1
20140052633 Gandhi Feb 2014 A1
20140057660 Wager Feb 2014 A1
20140082651 Sharifi Mar 2014 A1
20140092130 Anderson et al. Apr 2014 A1
20140096029 Schultz Apr 2014 A1
20140114565 Aziz et al. Apr 2014 A1
20140122658 Haeger et al. May 2014 A1
20140122787 Shalvi et al. May 2014 A1
20140129953 Spiegel May 2014 A1
20140143143 Fasoli et al. May 2014 A1
20140149519 Redfern et al. May 2014 A1
20140155102 Cooper et al. Jun 2014 A1
20140173424 Hogeg et al. Jun 2014 A1
20140173457 Wang et al. Jun 2014 A1
20140189592 Benchenaa et al. Jul 2014 A1
20140207679 Cho Jul 2014 A1
20140214471 Schreiner, III Jul 2014 A1
20140222564 Kranendonk et al. Aug 2014 A1
20140258405 Perkin Sep 2014 A1
20140265359 Cheng et al. Sep 2014 A1
20140266703 Dalley, Jr. et al. Sep 2014 A1
20140279061 Elimeliah et al. Sep 2014 A1
20140279436 Dorsey et al. Sep 2014 A1
20140279540 Jackson Sep 2014 A1
20140280537 Pridmore et al. Sep 2014 A1
20140282096 Rubinstein et al. Sep 2014 A1
20140287779 O'keefe et al. Sep 2014 A1
20140289833 Briceno Sep 2014 A1
20140306986 Gottesman et al. Oct 2014 A1
20140317302 Naik Oct 2014 A1
20140324627 Haver et al. Oct 2014 A1
20140324629 Jacobs Oct 2014 A1
20140325383 Brown et al. Oct 2014 A1
20150020086 Chen et al. Jan 2015 A1
20150046278 Pei et al. Feb 2015 A1
20150071619 Brough Mar 2015 A1
20150087263 Branscomb et al. Mar 2015 A1
20150088622 Ganschow et al. Mar 2015 A1
20150095020 Leydon Apr 2015 A1
20150096042 Mizrachi Apr 2015 A1
20150116529 Wu et al. Apr 2015 A1
20150169827 Laborde Jun 2015 A1
20150172534 Miyakawa et al. Jun 2015 A1
20150178260 Brunson Jun 2015 A1
20150222814 Li et al. Aug 2015 A1
20150261917 Smith Sep 2015 A1
20150312184 Langholz et al. Oct 2015 A1
20150350136 Flynn, III et al. Dec 2015 A1
20150365795 Allen et al. Dec 2015 A1
20150378502 Hu et al. Dec 2015 A1
20160006927 Sehn Jan 2016 A1
20160014063 Hogeg et al. Jan 2016 A1
20160085773 Chang et al. Mar 2016 A1
20160085863 Allen et al. Mar 2016 A1
20160099901 Allen et al. Apr 2016 A1
20160180887 Sehn Jun 2016 A1
20160182422 Sehn et al. Jun 2016 A1
20160182875 Sehn Jun 2016 A1
20160232571 Moshfeghi Aug 2016 A1
20160239248 Sehn Aug 2016 A1
20160277419 Allen et al. Sep 2016 A1
20160321708 Sehn Nov 2016 A1
20170006094 Abou Mahmoud et al. Jan 2017 A1
20170061308 Chen et al. Mar 2017 A1
20170287006 Azmoodeh et al. Oct 2017 A1
Foreign Referenced Citations (31)
Number Date Country
2887596 Jul 2015 CA
2051480 Apr 2009 EP
2151797 Feb 2010 EP
2399928 Sep 2004 GB
19990073076 Oct 1999 KR
20010078417 Aug 2001 KR
WO-1996024213 Aug 1996 WO
WO-1999063453 Dec 1999 WO
WO-2000058882 Oct 2000 WO
WO-2001029642 Apr 2001 WO
WO-2001050703 Jul 2001 WO
WO-2006118755 Nov 2006 WO
WO-2007092668 Aug 2007 WO
WO-2009043020 Apr 2009 WO
WO-2011040821 Apr 2011 WO
WO-2011119407 Sep 2011 WO
WO-2013008238 Jan 2013 WO
WO-2013045753 Apr 2013 WO
WO-2014006129 Jan 2014 WO
WO-2014068573 May 2014 WO
WO-2014115136 Jul 2014 WO
WO-2014194262 Dec 2014 WO
WO-2015192026 Dec 2015 WO
WO-2016044424 Mar 2016 WO
WO-2016054562 Apr 2016 WO
WO-2016065131 Apr 2016 WO
WO-2016100318 Jun 2016 WO
WO-2016100342 Jun 2016 WO
WO-2016149594 Sep 2016 WO
WO-2016179166 Nov 2016 WO
Non-Patent Literature Citations (51)
Entry
Meneses F., Moreira A. (2009) Building a Personal Symbolic Space Model from GSM CellID Positioning Data. In: Bonnin JM., Giannelli C., Magedanz T. (eds) MOBILWARE 2009. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering (LNICST), vol. 7. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01802-2_23 (Year: 2009).
John Calfee, Clifford Winston, Randolph Stempski; Econometric Issues in Estimating Consumer Preferences from Stated Preference Data: A Case Study of the Value of Automobile Travel Time. The Review of Economics and Statistics 2001; 83 (4): 699-707. doi: https://doi.org/10.1162/003465301753237777 (Year: 2001).
Kohei Tanaka, Yasue Kishino, Tsutomu Terada, and Shojiro Nishio. 2009. A destination prediction method using driving contexts and trajectory for car navigation systems. In SAC '09. Association for Computing Machinery, New York, NY, USA, 190-195. DOI: https://doi.org/10.1145/1529282.1529323 (Year: 2009).
Froehlich, Jon et al. “Route Prediction from Trip Observations” University of Washington [Published 2008] [Retrieved Jun. 2021] <URL: https://makeabilitylab.cs.washington.edu/media/publications/Route_Prediction_from_Trip_Observations_p1hrOb7.pdf> (Year: 2008).
Andrew Kirmse, Tushar Udeshi, Pablo Bellver, and Jim Shuma. 2011. Extracting patterns from location history. In Proceedings of the 19th ACM SIGSPATIAL GIS. Association for Computing Machinery, New York, NY, USA. DOI: https://doi.org/10.1145/2093973.2094032 (Year: 2011).
Jong Hee Kang, William Welbourne, Benjamin Stewart, and Gaetano Borriello. 2005. Extracting places from traces of locations. SIGMOBILE Mob. Comput. Commun. Rev. 9, 3 (Jul. 2005), 58-68. DOI: https://doi.org/10.1145/1094549.1094558 (Year: 2005).
Ling Chen, Mingqi Lv, Gencai Chen. A system for destination and future route prediction based on trajectory mining. Pervasive and Mobile Computing, ISSN 1574-1192, https://doi.org/10.1016/j.pmcj.2010.08.004, https://www.sciencedirect.com/science/article/pii/S1574119210000805 (Year: 2010).
Chang, J., & Sun, E. (2011). Location3: How users share and respond to location-based data on social. In Proceedings of the International AAAI Conference on Web and Social Media (vol. 5, No. 1, pp. 74-80). (Year: 2011).
“A Whole New Story”, Snap, Inc., URL: https://www.snap.com/en-US/news/, (2017), 13 pgs.
“Adding photos to your listing”, eBay, URL: http://pages.ebay.com/help/sell/pictures.html, (accessed May 24, 2017), 4 pgs.
“BlogStomp”, StompSoftware, URL: http://stompsoftware.com/blogstomp, (accessed May 24, 2017), 12 pgs.
“Cup Magic Starbucks Holiday Red Cups come to life with AR app”, Blast Radius, URL: http://www.blastradius.com/work/cup-magic, (2016), 7 pgs.
“Daily App: InstaPlace (iOS/Android): Give Pictures a Sense of Place”, TechPP, URL: http://techpp.com/2013/02/15/instaplace-app-review, (2013), 13 pgs.
“InstaPlace Photo App Tell the Whole Story”, URL: https://youtu.be/uF_gFkg1hBM, (Nov. 8, 2013), 113 pgs.
“International Application No. PCT/US2015/037251, International Search Report dated Sep. 29, 2015”, 2 pgs.
“Introducing Snapchat Stories”, URL: https://www.youtube.com/watch?v=88Cu3yN-LIM, (Oct. 3, 2013), 92 pgs.
“Macy's Believe-o-Magic”, URL: https://www.youtube.com/watch?v=xvzRXy3J0Z0, (Nov. 7, 2011), 102 pgs.
“Macys Introduces Augmented Reality Experience in Stores across Country as Part of Its 2011 Believe Campaign”, Business Wire, URL: https://www.businesswire.com/news/home/20111102006759/en/Macys-Introduces-Augmented-Reality-Experience-Stores-Country, (Nov. 2, 2011), 6 pgs.
“Starbucks Cup Magic”, URL: https://www.youtube.com/watch?v=RWwQXi9RG0w, (Nov. 8, 2011), 87 pgs.
“Starbucks Cup Magic for Valentine's Day”, URL: https://www.youtube.com/watch?v=8nvq0zjq10w, (Feb. 6, 2012), 88 pgs.
“Starbucks Holiday Red Cups Come to Life, Signaling the Return of the Merriest Season”, Business Wire, URL: http://www.businesswire.com/news/home/20111115005744/en/2479513/Starbucks-Holiday-Red-Cups-Life-Signaling-Return, (Nov. 15, 2011), 5 pgs.
Carthy, Roi, “Dear All Photo Apps: Mobli Just Won Filters”, URL: https://techcrunch.com/2011/09/08/mobli-filters, (Sep. 8, 2011), 10 pgs.
Janthong, Isaranu, “Instaplace ready on Android Google Play store”, Android App Review Thailand, URL: http://www.android-free-app-review.com/2013/01/instaplace-android-google-play-store.html, (Jan. 23, 2013), 9 pgs.
Macleod, Duncan, “Macys Believe-o-Magic App”, URL: http://theinspirationroom.com/daily/2011/macys-believe-o-magic-app, (Nov. 14, 2011), 10 pgs.
Macleod, Duncan, “Starbucks Cup Magic Lets Merry”, URL: http://theinspirationroom.com/daily/2011/starbucks-cup-magic, (Nov. 12, 2011), 8 pgs.
Notopoulos, Katie, “A Guide to the New Snapchat Filters and Big Fonts”, URL: https://www.buzzfeed.com/katienotopoulos/a-guide-to-the-new-snapchat-filters-and-big-fonts?utm_term=.bkQ9qVZWe#.nv58YXpkV, (Dec. 22, 2013), 13 pgs.
Panzarino, Matthew, “Snapchat Adds Filters, A Replay Function and for Whatever Reason, Time, Temperature and Speed Overlays”, URL: https://techcrunch.com/2013/12/20/snapchat-adds-filters-new-font-and-for-some-reason-time-temperature-and-speed-overlays/, (Dec. 20, 2013), 12 pgs.
Tripathi, Rohit, “Watermark Images in PHP and Save File on Server”, URL: http://code.rohitink.com/2012/12/28/watermark-images-in-php-and-save-file-on-server, (Dec. 28, 2012), 4 pgs.
“U.S. Appl. No. 13/405,190, 312 Amendment filed May 9, 2014”, 10 pgs.
“U.S. Appl. No. 13/405,190, Non Final Office Action dated Feb. 25, 2014”, 8 pgs.
“U.S. Appl. No. 13/405,190, Notice of Allowance dated Apr. 23, 2014”, 11 pgs.
“U.S. Appl. No. 13/405,190, Preliminary Amendment filed Sep. 7, 2012”, 3 pgs.
“U.S. Appl. No. 13/405,190, Response filed Mar. 27, 2014 to Non Final Office Action dated Feb. 25, 2014”, 11 pgs.
“U.S. Appl. No. 14/300,102, Non Final Office Action dated Apr. 7, 2015”, 14 pgs.
“U.S. Appl. No. 14/300,102, Non Final Office Action dated Dec. 4, 2014”, 14 pgs.
“U.S. Appl. No. 14/300,102, Notice of Allowance dated Oct. 19, 2015”, 6 pgs.
“U.S. Appl. No. 14/300,102, Preliminary Amendment filed Jun. 27, 2014”, 9 pgs.
“U.S. Appl. No. 14/300,102, Response filed Mar. 4, 2015 to Non Final Office Action dated Dec. 4, 2014”, 10 pgs.
“U.S. Appl. No. 14/300,102, Response filed Oct. 7, 2015 to Non Final Office Action dated Apr. 7, 2015”, 10 pgs.
“U.S. Appl. No. 15/018,538, Final Office Action dated Dec. 1, 2016”, 8 pgs.
“U.S. Appl. No. 15/018,538, Non Final Office Action dated Jul. 14, 2016”, 13 pgs.
“U.S. Appl. No. 15/018,538, Notice of Allowance dated Mar. 24, 2017”, 6 pgs.
“U.S. Appl. No. 15/018,538, Response filed Mar. 1, 2017 to Final Office Action dated Dec. 1, 2016”, 10 pgs.
“U.S. Appl. No. 15/018,538, Response filed Oct. 14, 2016 to Non Final Office Action dated Jul. 14, 2016”, 9 pgs.
Chen, Ruizhi, et al., “Development of a contextual thinking engine in mobile devices”, Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), (2014), 90-96.
Dashti, Marzieh, et al., “Detecting co-located mobile users”, IEEE International Conference on Communications (ICC) Year, (2015), 1565-1570.
Gregorich, et al., “Verification of AIRS boresight Accuracy Using Coastline Detection”, IEEE Transactions on Geoscience and Remote Sensing, vol. 41, Issue: 2, DOI: 10.1109/TGRS.2002.808311, (2003), 1-5.
Kun, Hsu-Yang, et al., “Using RFID Technology and SOA with 4D Escape Route”, Wireless Communications, Networking and Mobile Computing, WiCOM '08, 4th International Conference, DOI: 10.1109/WiCom.2008.3030, (2008), 1-4.
Mostafa, Elhamshary, et al., “A Fine-grained Indoor Location-based Social Network”, IEEE Transactions on Mobile Computing, vol. PP, Issue: 99, (2016), 12 pgs.
Roth, John D, et al., “On mobile positioning via Cellular Synchronization Assisted Refinement (CeSAR) in LTE and GSM networks”, 9th International Conference on Signal Processing and Communication Systems (ICSPCS), (2015).
Xia, Ning, et al., “GeoEcho: Inferring User Interests from Geotag Reports in Network Traffic”, Web Intelligence (WI) and Intelligent Agent Technologies (IAT), IEEE/WIC/ACM International Joint Conferences, vol. 2, DOI: 10.1109/WI-IAT.2014.73, (2014), 1-8.
Related Publications (1)
Number Date Country
20170332205 A1 Nov 2017 US
Continuations (3)
Number Date Country
Parent 15018538 Feb 2016 US
Child 15664496 US
Parent 14300102 Jun 2014 US
Child 15018538 US
Parent 13405190 Feb 2012 US
Child 14300102 US