Each of the following applications is hereby incorporated by reference: application Ser. No. 15/085,994 filed on Mar. 30, 2016; application Ser. No. 13/843,796 filed on Mar. 15, 2013. The Applicant hereby rescinds any disclaimer of claim scope in the parent application(s) or the prosecution history thereof and advises the USPTO that the claims in this application may be broader than any claim in the parent application(s).
Mobile devices increasingly have access to larger amounts and more varied types of personalized information, either stored on the device itself or accessible to the device over a network (i.e., in the cloud). This enables the owner/user of such a device to store and subsequently easily access information about their life. The information of use to the user of a mobile device may include their personal calendar (i.e., stored in a calendar application), their e-mail, and mapping information (e.g., user-entered locations, user-requested routes, etc.).
However, at the moment, these devices require users to specifically request information in order for the device to present the information. For instance, if a user wants a route to a particular destination, the user must enter information into the mobile device (e.g., via a touchscreen, voice input, etc.) requesting the route. Given the amount of data accessible to the mobile devices, a device that leverages this data in order to predict the information needed by a user would be useful.
Some embodiments of the invention provide a mobile device with a novel route prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for the device's user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data.
The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. The device's prediction engine relies only on user-specific data stored on the device in some embodiments, relies only on user-specific data stored outside of the device by external devices/servers in other embodiments, and relies on user-specific data stored both by the device and by other devices/servers in still other embodiments.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
The novel features as described here are set forth in the appended claims. However, for purposes of explanation, several embodiments are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
The route prediction engine 105 periodically performs automated processes that formulate predictions about current or future destinations of the device and/or formulate routes to such destinations for the device based on, e.g., the information stored in the databases 115 and 120. Based on these formulations, this engine then directs other modules of the device to relay relevant information to the user.
In different embodiments, the route prediction engine 105 performs its automated processes with different frequencies. For instance, to identify possible patterns of travel, it runs these processes once a day in some embodiments, several times a day in other embodiments, several times an hour in yet other embodiments, and several times a minute in still other embodiments. In addition, some embodiments allow the user of the device to configure how often or how aggressively the route prediction engine should perform its automated processes.
The system clock 125 specifies the time and date at any given moment, while the location identification engine 110 specifies the current location of the device. Different embodiments use different location identification engines. In some embodiments, this engine includes a global positioning system (GPS) engine that uses GPS data to identify the current location of the user. In some of these embodiments, this engine augments the GPS data with other terrestrial tracking data, such as triangulated cellular tower data, triangulated radio tower data, and correlations to known access points (e.g., cell-ID, Wi-Fi ID/network ID), in order to improve the accuracy of the identified location. In addition, some embodiments use one or more of the types of terrestrial tracking data without GPS data.
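For purposes of illustration only, the following sketch shows one way location fixes from different sources might be combined into a single position estimate; the record format, weighting scheme, and function names are assumptions made for this example and are not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class PositionFix:
    lat: float          # degrees
    lon: float          # degrees
    accuracy_m: float   # estimated error radius, meters
    source: str         # "gps", "cell", "wifi", ...

def fuse_fixes(fixes):
    """Accuracy-weighted average of several fixes (illustrative only)."""
    if not fixes:
        return None
    # Weight each fix by the inverse square of its reported error radius.
    weights = [1.0 / (f.accuracy_m ** 2) for f in fixes]
    total = sum(weights)
    lat = sum(w * f.lat for w, f in zip(weights, fixes)) / total
    lon = sum(w * f.lon for w, f in zip(weights, fixes)) / total
    return lat, lon

# Example: a GPS fix refined by a Wi-Fi access-point fix.
print(fuse_fixes([PositionFix(37.3349, -122.0090, 15.0, "gps"),
                  PositionFix(37.3352, -122.0087, 40.0, "wifi")]))
```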
Based on the location of the device and the time and date information, the route prediction engine 105 determines when it should perform its processes, and what destinations and routes to predict. To formulate the predicted destinations and/or predicted routes, the prediction engine also uses previous location data that it retrieves from the route history database 115 and other secondary databases 120.
In some embodiments, the route history data storage 115 stores previous destinations that the device recorded for previous routes taken by the device. Also, this data storage in some embodiments stores location and motion histories regarding the routes taken by the device, including identification of locations at which user travel ended. Alternatively, or conjunctively, this storage stores in some embodiments other route data (e.g., routes that are each specified in terms of a set of travel directions). The secondary data storages 120 store additional locations that the route prediction engine 105 can use to augment the set of possible destinations for the device. Examples of such additional locations include addresses extracted from recent electronic messages (e.g., e-mails or text messages), locations of calendared events, locations of future events for which the device has stored electronic tickets, etc.
In some embodiments, the route prediction engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on the destination, location, and motion histories that it retrieves from the route history data storage 115 and the secondary data storages 120. In some embodiments, this machine-learning engine is used by additional modules of the route prediction engine, including a predicted destination generator and a predicted route generator.
The predicted destination generator uses the machine-learning engine to create a more complete set of possible destinations for a particular time interval on a particular day. Various machine-learning engines can be used to perform such a task. In general, machine-learning engines can formulate a solution to a problem expressed in terms of a set of known input variables and a set of unknown output variables by taking previously solved problems (expressed in terms of known input and output variables) and extrapolating values for the unknown output variables of the current problem. In the case of generating a more complete set of possible destinations for a particular time interval on a particular day, the machine-learning engine is used to filter the route destinations stored in the route history data storage 115 and augment this set based on locations specified in the secondary data storages 120.
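By way of a hedged illustration, the following sketch shows one simple way a predicted destination generator might filter historical destinations by day and time interval and augment them with locations from the secondary data storages; the record formats and the frequency-based ranking stand in for the machine-learning engine described above and are assumptions of this example.

```python
from collections import Counter

def candidate_destinations(history, secondary, weekday, hour, window=2):
    """history: list of (destination_id, weekday, hour) visit records.
    secondary: destination_ids from calendar events, e-mails, tickets, etc."""
    # Keep historical destinations visited on this weekday near this hour.
    in_window = [dest for dest, wd, h in history
                 if wd == weekday and abs(h - hour) <= window]
    ranked = [dest for dest, _ in Counter(in_window).most_common()]
    # Augment with secondary locations not already present.
    for dest in secondary:
        if dest not in ranked:
            ranked.append(dest)
    return ranked

history = [("gym", 2, 18), ("gym", 2, 19), ("moms_house", 6, 12)]
print(candidate_destinations(history, ["dentist"], weekday=2, hour=18))
# ['gym', 'dentist']
```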
The predicted route generator then uses the machine-learning engine to create sets of associated destinations, with each set specified as a possible route for traveling. In some embodiments, each set of associated destinations includes start and end locations (which are typically called route destinations or endpoints), a number of locations in between the start and end locations, and a number of motion records specifying rate of travel (e.g., between the locations). In some embodiments, the predicted route generator uses the machine-learning engine to stitch together previously unrelated destination, location, and motion records into contiguous sets that specify potential routes. Also, in some embodiments, the route generator provides each of the sets of records that it generates to an external routing engine outside of the device (e.g., to a mapping service communicatively coupled to the device through a wireless network), so that the external routing service can generate the route for each generated set of associated records. One such approach will be further described below by reference to
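The following sketch illustrates, under assumed sample formats, how previously unrelated location and motion samples could be stitched into contiguous sets that a routing engine might then turn into full routes; the gap threshold and segmentation rule are assumptions of this example, not the claimed technique.

```python
def stitch_route_records(locations, max_gap_s=300):
    """locations: time-ordered (timestamp_s, lat, lon, speed_mps) samples.
    Splits the stream into contiguous segments wherever a long time gap
    suggests one trip ended and another began (illustrative only)."""
    segments, current = [], []
    for sample in locations:
        if current and sample[0] - current[-1][0] > max_gap_s:
            segments.append(current)
            current = []
        current.append(sample)
    if current:
        segments.append(current)
    # Each segment can then be summarized as a start point, an end point,
    # intermediate locations, and motion records, and handed to a routing
    # engine to generate a full route.
    return [{"start": seg[0][1:3], "end": seg[-1][1:3], "points": seg}
            for seg in segments]

trip = [(0, 37.33, -122.01, 12.0), (60, 37.34, -122.02, 13.5),
        (4000, 37.77, -122.42, 0.0)]   # long gap: starts a second segment
print(len(stitch_route_records(trip)))   # 2
```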
Once the route prediction engine 105 has one or more specified predicted routes, it supplies its set of predicted routes and/or metadata information about these routes to one or more output engines 130. The output engines then relay one or more relevant pieces of information to the user of the device based on the data supplied by the routing engine. Examples of such output include: (1) displaying potential routes on a map, (2) dynamically updating potential routes on the map, (3) dynamically sorting and updating suggestions of possible destinations to search or to which to provide a route, (4) dynamically providing updates to potential routes in the notification center that a user may manually access, (5) providing automatic prompts regarding current or future routes, etc.
Several such output presentations will now be described by reference to
The interface of some embodiments with the vehicle enables the device to provide a safe driving experience. In addition to the device performing certain tasks and presenting certain content (e.g., predicted destinations and/or routes) automatically, without the need for user input, some embodiments additionally allow user input through voice controls, touch screen controls, and/or physical controls mounted on the dashboard or steering wheel, among other possibilities. Also, certain applications and/or tasks may be automatically invoked based on sensors of the electronic device or information provided to the electronic device by other devices, such as systems of the vehicle (through the vehicle-device interface). For example, route prediction (or other) tasks may be automatically invoked when the vehicle is started, or based on a current location as provided by a GPS or other location indication sensor.
This figure illustrates the operation of the map application in two stages 205 and 210 that correspond to two different instances in time during a trip from the user's home to the home of the user's aunt. Specifically, the first stage 205 corresponds to the start of the trip when the user has left his home 220. At this stage along the trip, the mobile device's prediction engine has not yet predicted the destination or the route to the destination. Accordingly, the map application provides a display presentation 215 that simply shows the location of the device along the road being traveled by the vehicle. This presentation also provides on its left side the identity of the road that is being currently traversed, which in this example is I-280 North. In this embodiment, this presentation 215 is simple and does not have a supplementary audio component because this presentation is not meant to distract the user as the user has not affirmatively requested a route to be identified or navigated.
As the vehicle continues along its path, the device's route prediction engine at some point identifies a predicted destination for the journey and a route to this destination. In the example illustrated in
In some embodiments, the route prediction engine begins attempting to predict a destination for the device once it determines that the device is in transit and therefore that a destination is likely wanted. Different embodiments may use different factors or combinations of factors to make such a determination. For instance, the route prediction engine may use location information to identify that the device is now located on a road (e.g., I-280 North) and/or that the device is traveling at a speed associated with motorized vehicle travel.
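As an illustrative example only (the threshold and sample format are assumptions, not the claimed logic), a speed-based in-transit check might look like this:

```python
DRIVING_SPEED_MPS = 9.0   # roughly 20 mph; the threshold is an assumption

def appears_in_transit(recent_speeds_mps, min_samples=3):
    """Simple heuristic: the device looks like it is in a moving vehicle
    if several recent speed samples exceed a driving threshold."""
    fast = [s for s in recent_speeds_mps if s >= DRIVING_SPEED_MPS]
    return len(fast) >= min_samples

print(appears_in_transit([2.0, 11.5, 13.0, 12.2]))   # True
print(appears_in_transit([1.1, 1.3, 0.9]))           # False
```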
The second stage 210 shows that once the device's route prediction engine identifies the aunt's house 240 as a possible destination and then identifies a route from the device to this possible destination, the map application changes the display presentation 215 to the display presentation 225. Like the presentation 215, the presentation 225 has two parts. The first part 230 displays a route between the device's current location 235 and the aunt's house 240. In some embodiments, this part also displays a UI affordance (e.g., a selectable item) 245 for initiating a vehicle navigation presentation so that the map application can provide turn-by-turn navigation instructions.
The second part 250 of the display presentation 225 provides the identity of the destination, some other data regarding this destination (e.g., the frequency of travel) and an ETA for the trip. Like the first display presentation 215, the second display presentation is rather non-intrusive because this presentation is not meant to distract the user as the user has not affirmatively requested a route to be identified or navigated.
The example illustrated in
The first stage 305 shows the map application as providing a first presentation 322 of a first predicted route 320 to the house of the user's mother 326 and some information 325 about the predicted destination and the expected ETA. The second stage illustrates a second presentation 324 that is similar to the first presentation except that the user is shown to have reached an intersection 330 along the predicted route. As shown in the left part of the second presentation as well as the predicted route in the right part of the second presentation, the map application during the second stage is still predicting that the home of the user's mom is the eventual destination of the trip.
The third stage 315 shows that instead of turning right along the intersection to continue on the route to the mom's house, the user has taken a left turn towards the home of the user's father 328. Upon this turn, the map application provides a third presentation 329 that displays a second predicted route 335 to the father's house along with information 350 about the predicted destination and the expected ETA.
In many cases, the device's route prediction engine might concurrently identify multiple possible destinations and routes to these destinations. In these situations, the prediction engine ranks each predicted destination or route based on a factor that quantifies the likelihood that it is the actual destination or route. This ranking can then be used to determine which destination or route is processed by the output engine that receives the predictions.
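A minimal sketch of such ranking, assuming each prediction carries a likelihood score (the field names are illustrative, not part of the described engine):

```python
def rank_predictions(predictions):
    """predictions: list of dicts like {"destination": ..., "likelihood": 0.0-1.0}.
    Returns them sorted so an output engine can show the most likely first."""
    return sorted(predictions, key=lambda p: p["likelihood"], reverse=True)

ranked = rank_predictions([
    {"destination": "Dad's House", "likelihood": 0.35},
    {"destination": "Mom's House", "likelihood": 0.55},
    {"destination": "Coffee Shop", "likelihood": 0.10},
])
print([p["destination"] for p in ranked])
# ["Mom's House", "Dad's House", 'Coffee Shop']
```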
In both of the above examples, the predicted destinations are destinations particular to the user (i.e., for other devices belonging to other people, the destination address predicted in stages 305 and 310 would not be “Mom's House”). In some embodiments, the device's route prediction engine uses stored contact information (e.g., an address book entry for “Mom” listing a physical address) combined with route history and motion history indicating that the particular physical address is a frequent destination in order to identify that the user is likely heading to “Mom's House”.
In the second stage 410, the user approaches an exit that is close to a coffee shop 430 that the user frequently visits. Accordingly, as the user reaches this exit, the prediction engine identifies the coffee shop 430 as a likely destination and predicts the route 435 to the coffee shop (or sends the predicted destination to a route generation server in order for the server to generate and return the route 435). Once the map application receives these predictions, the application provides the second presentation 440 that shows the coffee shop 430 and the predicted route 435 to the coffee shop.
As the user reaches the exit 422, the route prediction engine identifies two other possible destinations for the user. Accordingly, the third stage 415 shows that as the user moves closer to the exit, three small circles 445 are added to the bottom of the presentation 450 of the map application. These three circles connote the existence of three predicted destinations/routes. The third stage 415 also shows the user performing a swipe operation on the presentation to navigate to another of the predicted destinations/routes. The user can perform such an action because in some embodiments the display (e.g., of the device or of the vehicle), which is displaying the presentation, has a touch sensitive screen. In addition to swipe gestures, some embodiments may accept other gestures, or selection of various affordances (e.g., left and right or up and down navigation arrows) in order to cycle through the different options.
If the presentation is being shown on a non-touch sensitive screen of a vehicle, the user can navigate to the next predicted destination/route through one of the keys, knobs, or other controls of the vehicle. While the previous
Regardless of how the user navigates to the next predicted destination/route, the map application presents the next predicted destination/route upon receiving the user's input. The fourth stage 420 of
In some embodiments, the map application uses the predicted destination/route to dynamically define and update other parts of its functionality in addition to or instead of its route guidance.
In each stage, the map application displays a “recents” window 535 that opens when the search field 540 is selected. This window is meant to provide suggestions for possible destinations to a user. When the map application does not have a predicted destination, the recents window initially displays pre-specified destinations, such as the user's home and the user's work, as shown in the first stage 505. This stage corresponds to a start of a trip 520. At this time, the prediction engine has not identified a predicted destination. In addition to displaying the pre-specified destinations, some embodiments may additionally display for selection recently entered locations obtained from recent tasks performed on the device or on another device by the same user. For instance, the recent locations may include a location of a restaurant for which the user recently searched in a web browser, the address of a contact that the user recently contacted (e.g., via e-mail, message, phone call, etc.), the location of a device of a contact that the user recently contacted (if the user has permission to acquire that information), a source location of a recent route to the device's current location, etc.
The second stage 510 shows that at a later position 525 along the trip, the route prediction engine identifies two possible destinations, which are the Hamburger Palace and the Pot Sticker Delight. The prediction engine at this stage provides these two possible destinations to the map application, having assigned the Hamburger Palace a higher probability of being the actual destination. Accordingly, in the second stage 510, the map application replaces in the recents window the default Home and Work destinations with the Hamburger Palace and Pot Sticker Delight destinations as these have been assigned higher probabilities of being the eventual destination than the default choices (which may also have been assigned some small but non-zero probability of being the user's destination). Based on the assignment of a higher probability to Hamburger Palace as the eventual destination, the map application displays the Hamburger Palace higher than the Pot Sticker Delight on this page.
However, after the user passes an intersection 550 shown in the third position 530 along the route, the prediction engine (which regularly recalculates probabilities for possible destinations, in some embodiments) determines that the Pot Sticker Delight restaurant now has a higher probability than Hamburger Palace of being the eventual destination. The engine notifies the map application of this change, and in response, the map application swaps the order of these two choices in the recents window 560. In some embodiments, the prediction engine sends a list of possible destinations and their probabilities to the map application (e.g., a particular number of destinations, or all destinations above a particular probability) on a regular basis. In other embodiments, the map application sends a request to the prediction engine for a given number of possible destinations with the highest probabilities in a particular order.
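To illustrate, the following sketch merges default destinations with the prediction engine's probability-ranked destinations to produce the ordering shown in the recents window; the default probabilities and the entry limit are assumptions of this example rather than the described implementation.

```python
DEFAULTS = [("Home", 0.05), ("Work", 0.05)]   # small default probabilities (assumed)

def recents_entries(predicted, max_entries=4):
    """predicted: list of (name, probability) from the prediction engine.
    Returns the entries to show in the recents window, most likely first."""
    merged = {name: p for name, p in DEFAULTS}
    for name, p in predicted:
        merged[name] = max(p, merged.get(name, 0.0))
    ordered = sorted(merged.items(), key=lambda item: item[1], reverse=True)
    return [name for name, _ in ordered[:max_entries]]

print(recents_entries([("Hamburger Palace", 0.6), ("Pot Sticker Delight", 0.3)]))
# ['Hamburger Palace', 'Pot Sticker Delight', 'Home', 'Work']
```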
In addition to the map application shown in the preceding four figures, many other applications operating on the mobile device can be clients for the predictions made by this device's route prediction engine. For instance, as illustrated by
The notification center display 605 of some embodiments includes a traffic tab 610 that, when selected, illustrates information about the traffic along predicted and/or in progress routes for the user.
On the first day, the user manually pulls on the notification center affordance 650 in the first stage 615. As shown by the second stage 617, this operation results in the presentation of the notification center display 605a. This display presents the enabled notification feature of this display (as indicated by the greyed out color of the notification affordance 625) and also presents a variety of notification alerts from a variety of applications. The second stage 617 also shows the user's touch selection of the traffic affordance 610.
As shown by the third stage 619, the selection of this affordance results in the presentation 605b of the traffic window, which states that traffic along I-280 north is typical for the predicted time of the user's departure. The expression of traffic as typical or atypical is highly useful because certain routes are always congested. Accordingly, a statement that the route is congested might not help the user. Rather, knowing that the traffic is better than usual, worse than usual, or the same as usual is more useful for the user.
In some embodiments, the notification services provide such normative expressions of traffic because the route prediction engine (1) predicts likely routes that the user might take at different time periods based on the user's historical travel patterns, and (2) compares the traffic along these routes to historical traffic levels along these routes. This engine then provides not only one or more predicted routes for the user but also normative quantification of the level of traffic along each of the predicted routes. When the engine provides more than one predicted route, the engine also provides a probability for each predicted route that quantifies the likelihood that the predicted route is the actual route. Based on these probabilities, the notification manager can display traffic information about the most likely route, or create a stacked, sorted display of such traffic information, much like the sorted, stacked display of routes explained above by reference to
On the second day, the user again manually pulls on the notification center affordance 650 in the first stage 621. As shown by the second stage 623, this operation again results in the presentation of the notification center display 605c. This display presents the enabled notification feature of this display (as indicated by the greyed out color of the notification affordance 625) and also presents a variety of notification alerts from a variety of applications. The second stage 623 also shows the user's touch selection of the traffic affordance 610.
As shown by the third stage 627, the selection of this affordance again results in a presentation 605d of the traffic window. In this case, the window states that traffic along I-280 north is worse than usual for the predicted time of the user's departure and that the user should consider leaving a little earlier than usual. The notification service provides this notice because the route prediction engine (1) has predicted that the user will likely take I-280 north, and (2) has compared today's traffic with historical traffic levels along this route to determine that traffic today is worse than usual.
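The following sketch illustrates one way traffic could be expressed normatively by comparing a current predicted travel time against a historical baseline for the same route and time of day; the tolerance band and data format are assumptions of this example, not the claimed comparison.

```python
def traffic_descriptor(current_minutes, historical_minutes, tolerance=0.10):
    """Compare the current predicted travel time along a route with the
    historical travel time for the same route, day, and time of day."""
    if not historical_minutes:
        return "no history"
    baseline = sum(historical_minutes) / len(historical_minutes)
    ratio = current_minutes / baseline
    if ratio > 1.0 + tolerance:
        return "worse than usual"
    if ratio < 1.0 - tolerance:
        return "better than usual"
    return "typical"

print(traffic_descriptor(38, [30, 32, 31]))   # worse than usual
print(traffic_descriptor(31, [30, 32, 31]))   # typical
```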
While specific names are given to the two tabs of the notification center (“Notifications” and “Traffic”), one of ordinary skill in the art will recognize that different names or icons may be used to represent these tabs. For instance, the “Traffic” tab is called the “Today” tab in some embodiments. Similarly, other specific UI names and icons may be represented differently in different embodiments.
The example in this figure is described in terms of four stages of operations of the device. The first stage 705 shows the user selecting the notification manager icon on a page 707 of the device 700. In some embodiments, the notification manager does not have an icon on the page 707, but rather is made available through a setting menu that is presented when a setting icon on page 707 or on another page of the device's UI is selected.
The second stage 710 shows the presentation of several notification controls, one of which is the traffic alert affordance 712. This affordance allows the traffic alert service of the notification manager to be turned on or off. The second stage shows the user turning on the traffic alert service, while the third stage 715 shows the notification page after this service has been turned on. The third stage 715 also shows the user turning off the screen of the device by pressing on a screen-off button 732 of the device.
The fourth stage 720 is a duration of time after the user has turned off the screen of the device. During this duration, the device's route prediction engine has identified that the user will likely take a predicted route and has also determined that this route is congested so much that the user will not likely make a 10 am meeting at a particular location that is indicated in the user's calendar. In this case, the prediction engine may have generated the predicted route based on the information in the user's calendar as well as the time of day and historical travel data for that time of day.
The prediction engine relays this information to the notification manager of the device, and in response, the notification manager generates a traffic alert prompt 745 that is illustrated in the fourth stage. This prompt notifies the user that traffic along the user's predicted route is worse than usual and that the user might wish to consider leaving 20 minutes earlier so that the user can make his 10 am meeting. By utilizing the calendar information, the device is able to identify the traffic congestion and alert the user while the user still has time to leave early for the meeting. When the user is en route and the device determines that the user will not be able to arrive at the meeting on time, the device can notify the user that he will be late, and enable the user to notify others participating in the meeting.
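As a hedged illustration of how such an alert could be derived from a calendared event time, a usual departure time, and today's predicted travel time (all input names and values here are assumptions of this example):

```python
from datetime import datetime, timedelta

def traffic_alert(event_time, usual_departure, usual_minutes, current_minutes):
    """If today's predicted travel time makes the usual departure too late for
    a calendared event, suggest leaving earlier (inputs assumed known)."""
    arrival = usual_departure + timedelta(minutes=current_minutes)
    if arrival <= event_time:
        return None
    leave_earlier = current_minutes - usual_minutes
    return (f"Traffic is worse than usual. Consider leaving "
            f"{leave_earlier} minutes earlier to make your "
            f"{event_time:%I:%M %p} meeting.")

alert = traffic_alert(event_time=datetime(2016, 3, 30, 10, 0),
                      usual_departure=datetime(2016, 3, 30, 9, 30),
                      usual_minutes=25, current_minutes=45)
print(alert)   # suggests leaving 20 minutes earlier
```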
Instead of such a traffic alert, the notification manager in other embodiments provides other types of alerts, such as the normative ones (e.g., worse than usual, better than usual) described above. Also, while the example illustrated in
In addition to the route prediction engine 105, the map manager 810 and the notification manager 815, the device 800 also includes a location identification engine 110, a route history data storage (e.g., database) 825, a location data storage 830, motion data storage 835, a data storage filter 840, an address parser 845, email and message processors 850 and 855, an external map service manager 860 and a network interface 865. This device includes a number of other modules (e.g., a system clock) that are not shown here in order to not obscure the description of this device with unnecessary detail.
The data storages 825, 830 and 835 store user-specific travel data that the route prediction engine 105 uses (1) to formulate predictions about current or future destinations and/or routes to such destinations for the device's user, and (2) to provide this information to the notification manager 815 and the map manager 810 in order to relay relevant information to the user. The user-specific data is different in different embodiments. In some embodiments, the destination data storage 825 stores data about previous destinations traveled to by the user, and addresses parsed from recent e-mails and/or messages sent to the user. The address parser 845 (1) examines new emails and messages received from the email and message processors 850 and 855 to identify and parse addresses in these emails and messages, (2) for each extracted address, determines whether the address is stored in the destination storage 825 already, and (3) stores each new address (that was not previously stored in the storage 825) in the destination storage 825. In some embodiments, the address parser 845 uses known techniques to identify and parse addresses in these messages (i.e., the techniques that enable devices to make addresses selectable within an e-mail or message).
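For illustration, the sketch below shows the deduplication step of such an address parser; the regular expression is a deliberately naive stand-in for the known address-detection techniques referenced above, and the function and store names are assumptions of this example.

```python
import re

# Hypothetical, overly simple street-address pattern; a real device would use a
# full data detector rather than a regex like this.
ADDRESS_RE = re.compile(r"\b\d{1,5}\s+[A-Z][A-Za-z]+\s+(?:St|Ave|Blvd|Rd|Dr)\b")

def harvest_addresses(message_text, destination_store):
    """Add any newly seen addresses in a message to the destination store."""
    added = []
    for address in ADDRESS_RE.findall(message_text):
        if address not in destination_store:
            destination_store.add(address)
            added.append(address)
    return added

store = set()
print(harvest_addresses("Dinner at 500 Castro St at 7?", store))  # ['500 Castro St']
print(harvest_addresses("Still on for 500 Castro St?", store))    # [] (already stored)
```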
The location data storage 830 in some embodiments stores locations along routes that the device previously traveled (e.g., GPS coordinates of the device at intervals of time along a route). The motion data storage 835 in some embodiments stores travel speeds of the device along previously traveled routes. In some embodiments, this includes the travel speeds for one or more locations stored in the location data storage 830 (e.g., the speed at each of the GPS coordinates, or speed traveled between two coordinates). The filter 840 periodically examines the destination, location and motion data stored in the storages 825, 830 and 835, and removes any data that is “stale.” In some embodiments, the filter's criterion for staleness is expressed in terms of the age of the data and the frequency of its use in predicting new destinations and/or new routes. Also, in some embodiments, the filter uses different criteria for measuring the staleness of different types of data. For instance, some embodiments filter parsed address data (provided by the address parser 845) that is older than a certain number of days (e.g., one day), while filtering destination, location and motion data related to previous routes of the device based on the age of the data and its frequency of use.
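A minimal sketch of such a staleness filter, assuming per-record ages and use counts; the one-day and ninety-day thresholds are illustrative assumptions, not the claimed criteria.

```python
from datetime import datetime, timedelta

def is_stale(record, now):
    """record: dict with 'kind', 'stored_at' (datetime), and 'uses' (count).
    Parsed addresses expire quickly; route data expires only if it is both
    old and rarely used.  Thresholds here are assumptions for illustration."""
    age = now - record["stored_at"]
    if record["kind"] == "parsed_address":
        return age > timedelta(days=1)
    return age > timedelta(days=90) and record["uses"] < 3

now = datetime(2016, 3, 30)
print(is_stale({"kind": "parsed_address",
                "stored_at": datetime(2016, 3, 28), "uses": 0}, now))  # True
print(is_stale({"kind": "route",
                "stored_at": datetime(2015, 10, 1), "uses": 12}, now)) # False
```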
Together, the route prediction engine uses the destination, location and motion data stored in the data storages 825, 830 and 835, along with additional inputs such as the system clock and location identification engine 110, to identify predicted destinations and routes between these destinations. The route prediction engine 105 periodically performs automated processes that formulate predictions about current or future destinations of the device and/or formulate routes to such destinations for the device. Based on these formulations, this engine then directs the notification manager 815 and the map manager 810 to relay relevant information to the user.
The prediction engine 105 includes a prediction processor 807, a predicted destination generator 870, a machine learning engine 875 and a predicted route generator 880. The prediction processor 807 is responsible for initiating the periodic automated processing of the engine, and serves as the central unit for coordinating much of the operations of the prediction engine.
In different embodiments, the prediction processor 807 initiates the automated processes with different frequencies. For instance, to identify possible patterns of travel, it runs these processes once a day in some embodiments, several times a day in other embodiments, several times an hour in yet other embodiments, and several times a minute in still other embodiments. In some embodiments, the prediction processor 807 initiates the automated process on a particular schedule (e.g., once per day) and additionally initiates the process when something changes (e.g., when a user adds an address to a calendar event or when something is added to a history). In some embodiments, the prediction processor 807 runs less frequently when the device is running on battery power than when it is plugged in to an external power source. In addition, some embodiments perform the automated prediction processes more frequently upon detection that the device is traveling along a road or at a speed above a particular threshold generally associated with motor vehicles (e.g., 20 mph, 30 mph, etc.). Furthermore, some embodiments allow the user of the device to configure how often or how aggressively the route prediction engine should perform its automated processes.
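By way of illustration, such frequency selection might be sketched as follows; the specific intervals, the speed threshold, and the aggressiveness factor are assumptions of this example, not the claimed scheduling policy.

```python
def prediction_interval_s(on_battery, speed_mps, user_aggressiveness=1.0):
    """Pick how often to run the prediction processes (values are assumptions).
    Runs more often when plugged in or when traveling at vehicle speeds."""
    base = 3600.0                      # default: about once an hour
    if speed_mps >= 9.0:               # ~20 mph: likely in a vehicle
        base = 60.0
    if on_battery:
        base *= 2.0                    # be gentler on the battery
    return base / max(user_aggressiveness, 0.1)

print(prediction_interval_s(on_battery=True, speed_mps=1.0))    # 7200.0
print(prediction_interval_s(on_battery=False, speed_mps=15.0))  # 60.0
```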
Whenever the prediction processor 807 initiates a route prediction operation, it directs the predicted destination generator 870 to generate a complete list of possible destinations based on previous route destinations and parsed new address locations stored in the destination data storage 825. In some embodiments, the predicted destination generator uses the machine-learning engine 875 to create a more complete set of possible destinations for a particular location of the device (as specified by the location identification engine 110) for a particular time interval on a particular day. Various machine-learning engines can be used to perform such a task. In general, machine-learning engines can formulate a solution to a problem expressed in terms of a set of known input variables and a set of unknown output variables by taking previously solved problems (expressed in terms of known input and output variables) and extrapolating values for the unknown output variables of the current problem. In the case of generating a more complete set of possible destinations for a particular time interval on a particular day, the machine-learning engine is used to filter the stored route destinations and augment this set based on addresses parsed by the parser 845.
In some embodiments, the machine-learning engine 875 not only facilitates the formulation of predicted future destinations, but it also facilitates the formulation of predicted routes to destinations based on the destination, location, and motion histories that are stored in the data storages 825, 830 and 835. For instance, in some embodiments, the predicted route generator 880 uses the machine-learning engine to create sets of associated destinations, with each set specified as a possible route for traveling. In some embodiments, each set of associated destinations includes start and end locations (which are typically called route destinations or endpoints), a number of locations in between the start and end locations, and a number of motion records specifying rate of travel (e.g., between the locations). In some embodiments, the predicted route generator uses the machine-learning engine to stitch together previously unrelated destination, location, and motion records into contiguous sets that specify potential routes. In some embodiments, the destination history includes both addresses that are received as explicit addresses (e.g., addresses input by the user or received in an e-mail) as well as additional destinations derived from previous locations in the location history of the device (e.g., in the location data storage 830).
In some embodiments, the location history is used to predict both destinations and routes. By identifying locations at which the device stopped traveling (or at least stopped traveling at motor vehicle speeds), and subsequently stayed within locations associated with a single address for at least a threshold amount of time, the predicted destination generator can extrapolate possible destinations from the location data 830. However, this use of the motion history may occasionally lead to false positives for destinations (e.g., when the user is stuck in a major traffic jam). Accordingly, some embodiments also identify (through an interface with a vehicle to which the device connects) that the vehicle has been turned off, and/or that subsequent low-speed motion (in terms of change of location coordinates) follows a movement pattern (e.g., based on accelerometer data) associated with leaving the interior of a vehicle (e.g., small-scale movement associated with walking). Also, in some embodiments, the route generator through the prediction processor 807 provides each of the sets of records that it generates to an external routing engine outside of the device so that the external routing service can generate the route for each generated set of associated records. To communicate with the external routing engine, the prediction processor 807 uses the external map service manager 860, which through the network interface 865 and a network (e.g., a wireless network) communicatively couples the device and the external map service. One such approach will be further described below by reference to
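The following sketch illustrates the dwell-based extrapolation of destinations from a location history; the radius and dwell thresholds, the planar coordinate frame, and the record format are assumptions of this example rather than the described embodiment.

```python
def dwell_destinations(samples, min_dwell_s=600, max_radius_m=75):
    """samples: time-ordered (timestamp_s, x_m, y_m) positions in a local
    planar frame.  A destination is recorded wherever the device stayed
    within a small radius for at least min_dwell_s (illustrative heuristic)."""
    destinations = []
    anchor = 0
    for i in range(1, len(samples)):
        ax, ay = samples[anchor][1], samples[anchor][2]
        x, y = samples[i][1], samples[i][2]
        moved = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 > max_radius_m
        if moved:
            if samples[i - 1][0] - samples[anchor][0] >= min_dwell_s:
                destinations.append((ax, ay))
            anchor = i
    if samples and samples[-1][0] - samples[anchor][0] >= min_dwell_s:
        destinations.append((samples[anchor][1], samples[anchor][2]))
    return destinations

trace = [(0, 0, 0), (300, 10, 5), (900, 12, 8), (960, 500, 400), (1000, 505, 402)]
print(dwell_destinations(trace))   # [(0, 0)]: a 15-minute stay near the origin
```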
Once the route generator 880 has one or more specified predicted routes, the prediction processor 807 supplies the generated set of predicted routes and/or metadata information about these routes to the notification manager 815 and the map manager 810 for relay of one or more relevant pieces of route information to the user of the device. For instance, based on the predicted destination/route data supplied by the device's route prediction engine 105, the map manager 810 performs the map output functions described above by reference to
In the example illustrated in
The computer 910 is associated with the device 800, e.g., belongs to the same cloud synchronization account as the device 800. This computer includes an address parser 920 with similar functionality to the address parser 845 of
The computer 910 also has a client service manager 930 and a network interface 935 that push any newly stored locations to an address data storage 940 in the server set 905 for the account associated with the device 800 and the computer 910. In some embodiments, the server set 905 communicatively couples to the device 800 and the computer 910 through the Internet 970. Accordingly, in these embodiments, the servers have one or more web servers 937 that facilitate communications between the back-end servers and the device or the computer.
The servers 905 include a cloud service manager 965 that coordinates all communication that it receives from the device 800 and the computer 910. It also has an address data storage 940 that stores addresses parsed by the parsers 845 and 920 as well as locations searched in the device and the computer. In some embodiments, the storage 940 also stores previous destinations and/or previous locations traveled by the user of the device. In other embodiments, however, information about the user's previous destinations and/or locations are not stored in the address data storage in order to respect and maintain the privacy of the user. To the extent that addresses are stored in the storage 940 in some embodiments, the storage 940 stores the addresses in an encrypted manner so that only keys residing on the device or the desktop can decrypt and gain access to such address data.
After storing any newly parsed address that it receives in the data storage 940, the cloud service manager 965 directs a destination publisher 955 to publish this address to any device or computer associated with the same synchronization account as the computer 910. Accordingly, this newly parsed data will get pushed by the publisher to the destination storage 825 through the web server 937, the Internet, the network interface 865 and the external map service manager 860. Once the data is in this storage 825, the route prediction engine 105 can use it to formulate its predicted destinations.
In some embodiments, the servers have one or more filters for filtering out stale data in the address data storage 940. The servers also include a route generator 960, which is the engine used by the device's route prediction engine 105 to generate one or more routes for each set of destination, location, and/or motion records that the engine 105 provides to it for route generation. In some embodiments, each module of the server set 905 that is shown as a single block might be implemented by one or more computers dedicated to the particular task of the module. Alternatively, or conjunctively, two or more modules of the server set might execute on the same computer. Furthermore, the different functionalities performed by the set of servers might be performed in disparate geographic locations (e.g., a set of servers at one location for the route generation with the address database stored at a different location). In some embodiments, the route generation function might be performed by servers belonging to a first organization, while a second organization stores the addresses in its cloud storage, with communication between the two organizations either performed directly or through the device 800 as an intermediary.
In addition to parsing e-mails and messages to identify possible destination locations, the device of some embodiments examines other secondary sources to identify other potential destinations. Examples of such other candidate destinations include locations of calendared events in the user's calendar, locations of events for which the user has electronic tickets, and locations associated with reminders.
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more computational or processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, random access memory (RAM) chips, hard drives, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The mapping and navigation applications of some embodiments operate on mobile devices, such as smart phones (e.g., iPhones®) and tablets (e.g., iPads®).
The peripherals interface 1115 is coupled to various sensors and subsystems, including a camera subsystem 1120, a wireless communication subsystem(s) 1125, an audio subsystem 1130, an I/O subsystem 1135, etc. The peripherals interface 1115 enables communication between the processing units 1105 and various peripherals. For example, an orientation sensor 1145 (e.g., a gyroscope) and an acceleration sensor 1150 (e.g., an accelerometer) are coupled to the peripherals interface 1115 to facilitate orientation and acceleration functions.
The camera subsystem 1120 is coupled to one or more optical sensors 1140 (e.g., a charged coupled device (CCD) optical sensor, a complementary metal-oxide-semiconductor (CMOS) optical sensor, etc.). The camera subsystem 1120 coupled with the optical sensors 1140 facilitates camera functions, such as image and/or video data capturing. The wireless communication subsystem 1125 serves to facilitate communication functions. In some embodiments, the wireless communication subsystem 1125 includes radio frequency receivers and transmitters, and optical receivers and transmitters (not shown in
The I/O subsystem 1135 involves the transfer between input/output peripheral devices, such as a display, a touch screen, etc., and the data bus of the processing units 1105 through the peripherals interface 1115. The I/O subsystem 1135 includes a touch-screen controller 1155 and other input controllers 1160 to facilitate the transfer between input/output peripheral devices and the data bus of the processing units 1105. As shown, the touch-screen controller 1155 is coupled to a touch screen 1165. The touch-screen controller 1155 detects contact and movement on the touch screen 1165 using any of multiple touch sensitivity technologies. The other input controllers 1160 are coupled to other input/control devices, such as one or more buttons. Some embodiments include a near-touch sensitive screen and a corresponding controller that can detect near-touch interactions instead of or in addition to touch interactions.
The memory interface 1110 is coupled to memory 1170. In some embodiments, the memory 1170 includes volatile memory (e.g., high-speed random access memory), non-volatile memory (e.g., flash memory), a combination of volatile and non-volatile memory, and/or any other type of memory. As illustrated in
The memory 1170 also includes communication instructions 1174 to facilitate communicating with one or more additional devices; graphical user interface instructions 1176 to facilitate graphic user interface processing; image processing instructions 1178 to facilitate image-related processing and functions; input processing instructions 1180 to facilitate input-related (e.g., touch input) processes and functions; audio processing instructions 1182 to facilitate audio-related processes and functions; and camera instructions 1184 to facilitate camera-related processes and functions. The instructions described above are merely exemplary and the memory 1170 includes additional and/or other instructions in some embodiments. For instance, the memory for a smartphone may include phone instructions to facilitate phone-related processes and functions. Additionally, the memory may include instructions for a mapping and navigation application as well as other applications. The above-identified instructions need not be implemented as separate software programs or modules. Various functions of the mobile computing device can be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
While the components illustrated in
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such machine-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The machine-readable media may store a program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of programs or code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs), customized ASICs or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. In addition, some embodiments execute software stored in programmable logic devices (PLDs), ROM, or RAM devices.
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
As mentioned above, various embodiments may operate within a map service operating environment.
In some embodiments, a map service is implemented by one or more nodes in a distributed computing system. Each node may be assigned one or more services or components of a map service. Some nodes may be assigned the same map service or component of a map service. A load balancing node in some embodiments distributes access or requests to other nodes within a map service. In some embodiments a map service is implemented as a single system, such as a single server. Different modules or hardware devices within a server may implement one or more of the various services provided by a map service.
A map service in some embodiments provides map services by generating map service data in various formats. In some embodiments, one format of map service data is map image data. Map image data provides image data to a client device so that the client device may process the image data (e.g., rendering and/or displaying the image data as a two-dimensional or three-dimensional map). Map image data, whether in two or three dimensions, may specify one or more map tiles. A map tile may be a portion of a larger map image. Assembling together the map tiles of a map produces the original map. Tiles may be generated from map image data, routing or navigation data, or any other map service data. In some embodiments map tiles are raster-based map tiles, with tile sizes ranging from any size both larger and smaller than a commonly-used 256 pixel by 256 pixel tile. Raster-based map tiles may be encoded in any number of standard digital image representations including, but not limited to, Bitmap (.bmp), Graphics Interchange Format (.gif), Joint Photographic Experts Group (.jpg, .jpeg, etc.), Portable Network Graphics (.png), or Tagged Image File Format (.tiff). In some embodiments, map tiles are vector-based map tiles, encoded using vector graphics, including, but not limited to, Scalable Vector Graphics (.svg) or a Drawing File (.drw). Some embodiments also include tiles with a combination of vector and raster data. Metadata or other information pertaining to the map tile may also be included within or along with a map tile, providing further map service data to a client device. In various embodiments, a map tile is encoded for transport utilizing various standards and/or protocols, some of which are described in examples below.
In various embodiments, map tiles may be constructed from image data of different resolutions depending on zoom level. For instance, for low zoom level (e.g., world or globe view), the resolution of map or image data need not be as high relative to the resolution at a high zoom level (e.g., city or street level). For example, when in a globe view, there may be no need to render street level artifacts as such objects would be so small as to be negligible in many cases.
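As a concrete illustration of the relationship between zoom level and tile resolution, the widely used Web Mercator tiling convention (assumed here purely for illustration; not necessarily the scheme of the map service described above) maps a latitude/longitude to tile indices as follows:

```python
import math

def tile_for(lat_deg, lon_deg, zoom):
    """Convert a latitude/longitude to Web Mercator tile indices.
    At zoom z the world is covered by 2**z by 2**z tiles (e.g., 256x256 px)."""
    n = 2 ** zoom
    lat = math.radians(lat_deg)
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

print(tile_for(37.775, -122.418, 2))    # low zoom: a handful of tiles cover the globe
print(tile_for(37.775, -122.418, 16))   # street level: many more, finer tiles
```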
A map service in some embodiments performs various techniques to analyze a map tile before encoding the tile for transport. This analysis may optimize map service performance for both client devices and a map service. In some embodiments map tiles are analyzed for complexity, according to vector-based graphic techniques, and constructed utilizing complex and non-complex layers. Map tiles may also be analyzed for common image data or patterns that may be rendered as image textures and constructed by relying on image masks. In some embodiments, raster-based image data in a map tile contains certain mask values, which are associated with one or more textures. Some embodiments also analyze map tiles for specified features that may be associated with certain map styles that contain style identifiers.
Other map services generate map service data relying upon various data formats separate from a map tile in some embodiments. For instance, map services that provide location data may utilize data formats conforming to location service protocols, such as, but not limited to, Radio Resource Location services Protocol (RRLP), TIA 801 for Code Division Multiple Access (CDMA), Radio Resource Control (RRC) position protocol, or LTE Positioning Protocol (LPP). Embodiments may also receive or request data from client devices identifying device capabilities or attributes (e.g., hardware specifications or operating system version) or communication capabilities (e.g., device communication bandwidth as determined by wireless signal strength or wired or wireless network type).
A map service may obtain map service data from internal or external sources. For example, satellite imagery used in map image data may be obtained from external services, or internal systems, storage devices, or nodes. Other examples may include, but are not limited to, GPS assistance servers, wireless network coverage databases, business or personal directories, weather data, government information (e.g., construction updates or road name changes), or traffic reports. Some embodiments of a map service may update map service data (e.g., wireless network coverage) for analyzing future requests from client devices.
Various embodiments of a map service may respond to client device requests for map services. These requests may be a request for a specific map or portion of a map. Some embodiments format requests for a map as requests for certain map tiles. In some embodiments, requests also supply the map service with starting locations (e.g., current locations) and destination locations for a route calculation. A client device may also request map service rendering information, such as map textures or style sheets. In at least some embodiments, requests are also one of a series of requests implementing turn-by-turn navigation. Requests for other geographic data may include, but are not limited to, current location, wireless network coverage, weather, traffic information, or nearby points-of-interest.
A map service, in some embodiments, analyzes client device requests to optimize a device or map service operation. For instance, a map service may recognize that the location of a client device is in an area of poor communications (e.g., weak wireless signal) and send more map service data to supply a client device in the event of loss in communication or send instructions to utilize different client hardware (e.g., orientation sensors) or software (e.g., utilize wireless location services or Wi-Fi positioning instead of GPS-based services). In another example, a map service may analyze a client device request for vector-based map image data and determine that raster-based map data better optimizes the map image data according to the image's complexity. Embodiments of other map services may perform similar analysis on client device requests and as such the above examples are not intended to be limiting.
Various embodiments of client devices (e.g., client devices 1202a-1202c) are implemented on different portable-multifunction device types. Client devices 1202a-1202c utilize map service 1230 through various communication methods and protocols. In some embodiments, client devices 1202a-1202c obtain map service data from map service 1230. Client devices 1202a-1202c request or receive map service data. Client devices 1202a-1202c then process map service data (e.g., render and/or display the data) and may send the data to another software or hardware module on the device or to an external device or system.
A client device, according to some embodiments, implements techniques to render and/or display maps. These maps may be requested or received in various formats, such as the map tiles described above. A client device may render a map in two-dimensional or three-dimensional views. Some embodiments of a client device display a rendered map and allow a user, system, or device providing input to manipulate a virtual camera in the map, changing the map display according to the virtual camera's position, orientation, and field-of-view. Various forms of input and input devices are implemented to manipulate the virtual camera. In some embodiments, touch input, through certain single or combination gestures (e.g., touch-and-hold or a swipe), manipulates the virtual camera. Other embodiments allow manipulation of the device's physical location to manipulate the virtual camera. For instance, a client device may be tilted up from its current position to manipulate the virtual camera to rotate up. In another example, a client device may be tilted forward from its current position to move the virtual camera forward. Other input devices to the client device may be implemented including, but not limited to, auditory input (e.g., spoken words), a physical keyboard, a mouse, and/or a joystick.
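A minimal sketch of mapping device tilt onto the virtual camera, assuming a simple camera model and scale factor that are not taken from the disclosure:

```swift
import Foundation

// Assumed camera model: pitch for up/down rotation, a forward offset for motion.
struct VirtualCamera {
    var pitchDegrees: Double    // rotation up/down
    var forwardOffset: Double   // position along the viewing direction
}

/// Tilting the device up rotates the camera up; tilting it forward moves it forward.
/// The 0.1 scale factor is an assumption.
func apply(tiltDeltaDegrees: Double, forwardTilt: Bool, to camera: inout VirtualCamera) {
    if forwardTilt {
        camera.forwardOffset += tiltDeltaDegrees * 0.1
    } else {
        camera.pitchDegrees += tiltDeltaDegrees
    }
}

var camera = VirtualCamera(pitchDegrees: 45, forwardOffset: 0)
apply(tiltDeltaDegrees: 10, forwardTilt: false, to: &camera)
print(camera)   // pitch increases as the device is tilted up
```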
Some embodiments provide various visual feedback to virtual camera manipulations, such as displaying an animation of possible virtual camera manipulations when transitioning from two-dimensional map views to three-dimensional map views. Some embodiments also allow input to select a map feature or object (e.g., a building) and highlight the object, producing a blur effect that maintains the virtual camera's perception of three-dimensional space.
In some embodiments, a client device implements a navigation system (e.g., turn-by-turn navigation). A navigation system provides directions or route information, which may be displayed to a user. Some embodiments of a client device request directions or a route calculation from a map service. A client device may receive map image data and route data from a map service. In some embodiments, a client device implements a turn-by-turn navigation system, which provides real-time route and direction information based upon location information and route information received from a map service and/or other location system, such as Global Positioning Satellite (GPS). A client device may display map image data that reflects the current location of the client device and update the map image data in real-time. A navigation system may provide auditory or visual directions to follow a certain route.
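A simplified sketch of surfacing the next turn-by-turn instruction from route data received from a map service; the data shapes and lookup are assumptions:

```swift
import Foundation

// Assumed route representation: ordered points, some carrying a maneuver instruction.
struct RoutePoint {
    var latitude: Double
    var longitude: Double
    var instruction: String?
}

/// Returns the next maneuver at or after the device's current position on the route.
func nextInstruction(currentIndex: Int, route: [RoutePoint]) -> String? {
    guard currentIndex < route.count else { return nil }
    return route[currentIndex...].compactMap { $0.instruction }.first
}

let route = [
    RoutePoint(latitude: 37.0, longitude: -122.0, instruction: nil),
    RoutePoint(latitude: 37.1, longitude: -122.0, instruction: "Turn right onto 1st St"),
    RoutePoint(latitude: 37.2, longitude: -122.1, instruction: "Arrive at destination"),
]
print(nextInstruction(currentIndex: 0, route: route) ?? "none")
```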
A virtual camera is implemented to manipulate navigation map data according to some embodiments. Some embodiments of client devices allow the device to adjust the virtual camera display orientation to bias toward the route destination. Some embodiments also allow the virtual camera to navigate turns by simulating the inertial motion of the virtual camera.
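One possible way to approximate the inertial easing of the camera heading into a turn; the smoothing factor is an assumption:

```swift
import Foundation

/// Moves the camera heading a fraction of the way toward the target each frame
/// instead of snapping, giving an impression of inertia. The smoothing value is illustrative.
func easedHeading(current: Double, target: Double, smoothing: Double = 0.15) -> Double {
    return current + (target - current) * smoothing
}

var heading = 0.0        // degrees
let turnTarget = 90.0    // heading after the upcoming turn
for _ in 0..<5 { heading = easedHeading(current: heading, target: turnTarget) }
print(heading)           // gradually approaches 90 over several frames
```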
Client devices implement various techniques to utilize map service data from a map service. Some embodiments implement techniques to optimize rendering of two-dimensional and three-dimensional map image data. In some embodiments, a client device locally stores rendering information. For instance, a client stores a style sheet which provides rendering directions for image data containing style identifiers. In another example, common image textures may be stored to decrease the amount of map image data transferred from a map service. Client devices in different embodiments implement various modeling techniques to render two-dimensional and three-dimensional map image data, examples of which include, but are not limited to: generating three-dimensional buildings out of two-dimensional building footprint data; modeling two-dimensional and three-dimensional map objects to determine the client device communication environment; generating models to determine whether map labels are visible from a certain virtual camera position; and generating models to smooth transitions between map image data. Some embodiments of client devices also order or prioritize map service data using certain techniques. For instance, a client device detects the motion or velocity of a virtual camera and, if it exceeds certain threshold values, loads and renders lower-detail image data for certain areas. Other examples include rendering vector-based curves as a series of points, preloading map image data for areas of poor communication with a map service, adapting textures based on display zoom level, or rendering map image data according to complexity.
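A hedged sketch of the velocity-based detail selection mentioned above; the threshold and detail levels are illustrative:

```swift
import Foundation

// Illustrative client-side detail selection: when the virtual camera moves
// faster than a threshold, lower-detail tiles are requested and rendered.
enum DetailLevel { case high, low }

func detailLevel(forCameraSpeed speed: Double, threshold: Double = 300.0) -> DetailLevel {
    // speed in map units per second; exceeding the threshold trades detail for responsiveness
    return speed > threshold ? .low : .high
}

print(detailLevel(forCameraSpeed: 450))   // .low while the camera pans quickly
print(detailLevel(forCameraSpeed: 40))    // .high once motion settles
```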
In some embodiments, client devices communicate utilizing various data formats separate from a map tile. For instance, some client devices implement Assisted Global Positioning Satellites (A-GPS) and communicate with location services that utilize data formats conforming to location service protocols, such as, but not limited to, Radio Resource Location services Protocol (RRLP), TIA 801 for Code Division Multiple Access (CDMA), Radio Resource Control (RRC) position protocol, or LTE Positioning Protocol (LPP). Client devices may also receive GPS signals directly. Embodiments may also send data, with or without solicitation from a map service, identifying the client device's capabilities or attributes (e.g., hardware specifications or operating system version) or communication capabilities (e.g., device communication bandwidth as determined by wireless signal strength or wired or wireless network type).
In some embodiments, both voice and data communications are established over wireless network 1210 and access device 1212. For instance, device 1202a can place and receive phone calls (e.g., using voice over Internet Protocol (VoIP) protocols), send and receive e-mail messages (e.g., using Simple Mail Transfer Protocol (SMTP) or Post Office Protocol 3 (POP3)), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over wireless network 1210, gateway 1214, and WAN 1220 (e.g., using Transmission Control Protocol/Internet Protocol (TCP/IP) or User Datagram Protocol (UDP)). Likewise, in some implementations, devices 1202b and 1202c can place and receive phone calls, send and receive e-mail messages, and retrieve electronic documents over access device 1212 and WAN 1220. In various embodiments, any of the illustrated client devices may communicate with map service 1230 and/or other service(s) 1250 using a persistent connection established in accordance with one or more security protocols, such as the Secure Sockets Layer (SSL) protocol or the Transport Layer Security (TLS) protocol.
Devices 1202a and 1202b can also establish communications by other means. For example, wireless device 1202a can communicate with other wireless devices (e.g., other devices 1202b, cell phones, etc.) over the wireless network 1210. Likewise, devices 1202a and 1202b can establish peer-to-peer communications 1240 (e.g., a personal area network) by use of one or more communication subsystems, such as Bluetooth® communication from Bluetooth Special Interest Group, Inc. of Kirkland, Washington. Device 1202c can also establish peer-to-peer communications with devices 1202a or 1202b (not shown). Other communication protocols and topologies can also be implemented. Devices 1202a and 1202b may also receive Global Positioning Satellite (GPS) signals from GPS satellites 1260.
Devices 1202a, 1202b, and 1202c can communicate with map service 1230 over one or more wired and/or wireless networks 1210 or 1212. For instance, map service 1230 can provide map service data to rendering devices 1202a, 1202b, and 1202c. Map service 1230 may also communicate with other services 1250 to obtain data to implement map services. Map service 1230 and other services 1250 may also receive GPS signals from GPS satellites 1260.
In various embodiments, map service 1230 and/or other service(s) 1250 are configured to process search requests from any of the client devices. Search requests may include but are not limited to queries for businesses, addresses, residential locations, points of interest, or some combination thereof. Map service 1230 and/or other service(s) 1250 may be configured to return results related to a variety of parameters including but not limited to a location entered into an address bar or other text entry field (including abbreviations and/or other shorthand notation), a current map view (e.g., the user may be viewing one location on the multifunction device while residing in another location), the current location of the user (e.g., in cases where the current map view did not include search results), and the current route (if any). In various embodiments, these parameters may affect the composition of the search results (and/or the ordering of the search results) based on different priority weightings. In various embodiments, the search results that are returned may be a subset of results selected based on specific criteria including but not limited to a quantity of times the search result (e.g., a particular point of interest) has been requested, a measure of quality associated with the search result (e.g., highest user or editorial review rating), and/or the volume of reviews for the search results (e.g., the number of times the search result has been reviewed or rated).
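A sketch, assuming illustrative priority weightings, of ranking search results by the parameters listed above (request count, rating, review volume):

```swift
import Foundation

// Illustrative weighting of search-result parameters; a deployed service would
// tune these weights, and the field names are assumptions.
struct SearchResult {
    var name: String
    var timesRequested: Int
    var rating: Double       // 0...5
    var reviewCount: Int
}

func rank(_ results: [SearchResult]) -> [SearchResult] {
    func score(_ r: SearchResult) -> Double {
        return 0.5 * Double(r.timesRequested)
             + 30.0 * r.rating
             + 0.1 * Double(r.reviewCount)
    }
    return results.sorted { score($0) > score($1) }
}

let ranked = rank([
    SearchResult(name: "Cafe A", timesRequested: 120, rating: 4.5, reviewCount: 200),
    SearchResult(name: "Cafe B", timesRequested: 300, rating: 3.9, reviewCount: 50),
])
print(ranked.map { $0.name })
```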
In various embodiments, map service 1230 and/or other service(s) 1250 are configured to provide auto-complete search results that are displayed on the client device, such as within the mapping application. For instance, auto-complete search results may populate a portion of the screen as the user enters one or more search keywords on the multifunction device. In some cases, this feature may save the user time, as the desired search result may be displayed before the user enters the full search query. In various embodiments, the auto-complete search results may be search results found by the client on the client device (e.g., bookmarks or contacts), search results found elsewhere (e.g., from the Internet) by map service 1230 and/or other service(s) 1250, and/or some combination thereof. As is the case with commands, any of the search queries may be entered by the user via voice or through typing. The multifunction device may be configured to display search results graphically within any of the map displays described herein. For instance, a pin or other graphical indicator may specify locations of search results as points of interest. In various embodiments, responsive to a user selection of one of these points of interest (e.g., a touch selection, such as a tap), the multifunction device is configured to display additional information about the selected point of interest including but not limited to ratings, reviews or review snippets, hours of operation, store status (e.g., open for business, permanently closed, etc.), and/or images of a storefront for the point of interest. In various embodiments, any of this information may be displayed on a graphical information card that is displayed in response to the user's selection of the point of interest.
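A minimal sketch of merging locally found auto-complete results (e.g., bookmarks or contacts) with results returned by the service; the names and the result limit are placeholders:

```swift
import Foundation

/// Merges local and remote suggestions as the user types, keeping local results
/// first, removing duplicates, and capping the list for display.
func autocomplete(prefix: String, local: [String], remote: [String], limit: Int = 5) -> [String] {
    let matches = (local + remote)
        .filter { $0.lowercased().hasPrefix(prefix.lowercased()) }
    var seen = Set<String>()
    return matches.filter { seen.insert($0).inserted }.prefix(limit).map { $0 }
}

let suggestions = autocomplete(prefix: "co",
                               local: ["Coffee Corner (bookmark)", "Co-op Market (contact)"],
                               remote: ["Coffee Corner (bookmark)", "Coastal Trailhead"])
print(suggestions)
```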
In various embodiments, map service 1230 and/or other service(s) 1250 provide one or more feedback mechanisms to receive feedback from client devices 1202a-1202c. For instance, client devices may provide feedback on search results to map service 1230 and/or other service(s) 1250 (e.g., feedback specifying ratings, reviews, temporary or permanent business closures, errors, etc.); this feedback may be used to update information about points of interest in order to provide more accurate or more up-to-date search results in the future. In some embodiments, map service 1230 and/or other service(s) 1250 may provide testing information to the client device (e.g., an A/B test) to determine which search results are best. For instance, at random intervals, the client device may receive and present two search results to a user and allow the user to indicate the best result. The client device may report the test results to map service 1230 and/or other service(s) 1250 to improve future search results based on the chosen testing technique, such as an A/B test technique in which a baseline control sample is compared to a variety of single-variable test samples in order to improve results.
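A hedged sketch of the client-side A/B feedback described above, in which two candidate results are shown and the user's choice is reported back; the types and field names are assumptions:

```swift
import Foundation

// Illustrative A/B feedback payload; the structure and endpoint are assumptions.
struct ABTestReport: Codable {
    let testId: String
    let shownResults: [String]
    let chosenResult: String
}

func makeReport(testId: String, shown: [String], chosenIndex: Int) -> ABTestReport {
    return ABTestReport(testId: testId, shownResults: shown, chosenResult: shown[chosenIndex])
}

let report = makeReport(testId: "poi-ranking-7",
                        shown: ["Result A", "Result B"],
                        chosenIndex: 1)
if let json = try? JSONEncoder().encode(report), let text = String(data: json, encoding: .utf8) {
    print(text)   // the client would send this payload back to the map service
}
```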
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, many of the figures illustrate various touch gestures (e.g., taps, double taps, swipe gestures, press and hold gestures, etc.). However, many of the illustrated operations could be performed via different touch gestures (e.g., a swipe instead of a tap, etc.) or by non-touch input (e.g., using a cursor controller, a keyboard, a touchpad/trackpad, a near-touch sensitive screen, etc.). Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
1102037 | Gibson | Jun 1914 | A |
4846836 | Reich | Jul 1989 | A |
4914605 | Loughmiller et al. | Apr 1990 | A |
5289572 | Yano et al. | Feb 1994 | A |
5459667 | Odagaki et al. | Oct 1995 | A |
5654892 | Fujii et al. | Aug 1997 | A |
6101443 | Kato et al. | Aug 2000 | A |
6321158 | Delorme et al. | Nov 2001 | B1 |
6321161 | Herbst et al. | Nov 2001 | B1 |
6322538 | Elbert et al. | Nov 2001 | B1 |
6480783 | Myr | Nov 2002 | B1 |
6615130 | Myr | Sep 2003 | B2 |
6653948 | Kunimatsu et al. | Nov 2003 | B1 |
6710774 | Kawasaki et al. | Mar 2004 | B1 |
6764518 | Godin | Jul 2004 | B2 |
6845776 | Stack et al. | Jan 2005 | B2 |
6847889 | Park et al. | Jan 2005 | B2 |
6862524 | Nagda et al. | Mar 2005 | B1 |
6944539 | Yamada et al. | Sep 2005 | B2 |
6960233 | Berg et al. | Nov 2005 | B1 |
7149625 | Mathews et al. | Dec 2006 | B2 |
7158878 | Rasmussen et al. | Jan 2007 | B2 |
7200394 | Aoki et al. | Apr 2007 | B2 |
7254580 | Gharachorloo et al. | Aug 2007 | B1 |
7274311 | MacLeod | Sep 2007 | B1 |
7289812 | Roberts et al. | Oct 2007 | B1 |
7469827 | Katragadda et al. | Dec 2008 | B2 |
7552009 | Nelson | Jun 2009 | B2 |
7555525 | Malik | Jun 2009 | B2 |
7555725 | Abramson et al. | Jun 2009 | B2 |
7634463 | Katragadda et al. | Dec 2009 | B1 |
7702456 | Singh | Apr 2010 | B2 |
7708684 | Demarais et al. | May 2010 | B2 |
7729854 | Muramatsu | Jun 2010 | B2 |
7831384 | Bill | Nov 2010 | B2 |
7846174 | Baker et al. | Dec 2010 | B2 |
7847686 | Atkins et al. | Dec 2010 | B1 |
7860645 | Kim et al. | Dec 2010 | B2 |
7869941 | Coughlin et al. | Jan 2011 | B2 |
7885761 | Tajima et al. | Feb 2011 | B2 |
7917288 | Cheung et al. | Mar 2011 | B2 |
7925427 | Zehler | Apr 2011 | B2 |
7981162 | Stack et al. | Jul 2011 | B2 |
8020104 | Robarts et al. | Sep 2011 | B2 |
8078397 | Zilka | Dec 2011 | B1 |
8100931 | Baker et al. | Jan 2012 | B2 |
8190326 | Nezu et al. | May 2012 | B2 |
8205157 | Van et al. | Jun 2012 | B2 |
8255830 | Ording et al. | Aug 2012 | B2 |
8346590 | Norton et al. | Jan 2013 | B2 |
8355862 | Matas et al. | Jan 2013 | B2 |
8370736 | Ording et al. | Feb 2013 | B2 |
8423052 | Ueda et al. | Apr 2013 | B2 |
8428871 | Matthews et al. | Apr 2013 | B1 |
8463289 | Shklarski et al. | Jun 2013 | B2 |
8464182 | Blumenberg et al. | Jun 2013 | B2 |
8509816 | Branch et al. | Aug 2013 | B2 |
8510665 | Ording et al. | Aug 2013 | B2 |
8529431 | Baker et al. | Sep 2013 | B2 |
8564544 | Jobs et al. | Oct 2013 | B2 |
8584050 | Ording et al. | Nov 2013 | B2 |
8606516 | Vertelney et al. | Dec 2013 | B2 |
8607167 | Matas et al. | Dec 2013 | B2 |
8639654 | Vervaet et al. | Jan 2014 | B2 |
8694791 | Rohrweck et al. | Apr 2014 | B1 |
8745018 | Singleton et al. | Jun 2014 | B1 |
8756534 | Ording et al. | Jun 2014 | B2 |
8762048 | Kosseifi et al. | Jun 2014 | B2 |
8781716 | Wenneman | Jul 2014 | B1 |
8798918 | Onishi et al. | Aug 2014 | B2 |
8825362 | Kirsch | Sep 2014 | B2 |
8825381 | Tang | Sep 2014 | B2 |
8849564 | Mutoh | Sep 2014 | B2 |
8881060 | Chaudhri et al. | Nov 2014 | B2 |
8881061 | Chaudhri et al. | Nov 2014 | B2 |
8886398 | Kato et al. | Nov 2014 | B2 |
8918736 | Jobs et al. | Dec 2014 | B2 |
8954524 | Hamon | Feb 2015 | B1 |
9043150 | Forstall et al. | May 2015 | B2 |
9060844 | Kagan et al. | Jun 2015 | B2 |
9170122 | Moore et al. | Oct 2015 | B2 |
9200915 | Vulcano et al. | Dec 2015 | B2 |
9223494 | Desalvo et al. | Dec 2015 | B1 |
9317813 | Mcgavran et al. | Apr 2016 | B2 |
9326058 | Tachibana et al. | Apr 2016 | B2 |
9347787 | Moore et al. | May 2016 | B2 |
9500492 | Moore et al. | Nov 2016 | B2 |
9574894 | Karakotsios | Feb 2017 | B1 |
9631930 | Mcgavran et al. | Apr 2017 | B2 |
20010056325 | Pu et al. | Dec 2001 | A1 |
20030040808 | Stack et al. | Feb 2003 | A1 |
20030093117 | Saadat | May 2003 | A1 |
20030156097 | Kakihara et al. | Aug 2003 | A1 |
20030200192 | Bell et al. | Oct 2003 | A1 |
20040009815 | Zotto et al. | Jan 2004 | A1 |
20040070602 | Kobuya et al. | Apr 2004 | A1 |
20040092892 | Kagan et al. | May 2004 | A1 |
20040128066 | Kudo | Jul 2004 | A1 |
20040138761 | Stack et al. | Jul 2004 | A1 |
20040143342 | Stack et al. | Jul 2004 | A1 |
20040158395 | Yamada et al. | Aug 2004 | A1 |
20040160342 | Curley et al. | Aug 2004 | A1 |
20040172141 | Stack et al. | Sep 2004 | A1 |
20040193371 | Koshiji et al. | Sep 2004 | A1 |
20040220682 | Levine et al. | Nov 2004 | A1 |
20040236498 | Le et al. | Nov 2004 | A1 |
20040260457 | Kawase et al. | Dec 2004 | A1 |
20050096673 | Stack et al. | May 2005 | A1 |
20050096750 | Kagan et al. | May 2005 | A1 |
20050125148 | Van et al. | Jun 2005 | A1 |
20050131631 | Nakano et al. | Jun 2005 | A1 |
20050149261 | Lee et al. | Jul 2005 | A9 |
20050177181 | Kagan et al. | Aug 2005 | A1 |
20050192629 | Saadat et al. | Sep 2005 | A1 |
20050247320 | Stack et al. | Nov 2005 | A1 |
20050251324 | Wiener et al. | Nov 2005 | A1 |
20050273251 | Nix et al. | Dec 2005 | A1 |
20050273252 | Nix et al. | Dec 2005 | A1 |
20060004680 | Robarts et al. | Jan 2006 | A1 |
20060015246 | Hui | Jan 2006 | A1 |
20060025925 | Fushiki et al. | Feb 2006 | A1 |
20060041372 | Kubota et al. | Feb 2006 | A1 |
20060074553 | Foo et al. | Apr 2006 | A1 |
20060155375 | Kagan et al. | Jul 2006 | A1 |
20060155431 | Berg et al. | Jul 2006 | A1 |
20060156209 | Matsuura et al. | Jul 2006 | A1 |
20060161440 | Nakayama et al. | Jul 2006 | A1 |
20060173841 | Bill | Aug 2006 | A1 |
20060179277 | Flachs et al. | Aug 2006 | A1 |
20060195257 | Nakamura | Aug 2006 | A1 |
20060206063 | Kagan et al. | Sep 2006 | A1 |
20060252983 | Lembo et al. | Nov 2006 | A1 |
20060264982 | Viola et al. | Nov 2006 | A1 |
20060271287 | Gold et al. | Nov 2006 | A1 |
20060287818 | Okude et al. | Dec 2006 | A1 |
20060293943 | Tischhauser et al. | Dec 2006 | A1 |
20070016362 | Nelson | Jan 2007 | A1 |
20070021914 | Song | Jan 2007 | A1 |
20070060932 | Stack et al. | Mar 2007 | A1 |
20070135990 | Seymour et al. | Jun 2007 | A1 |
20070140187 | Rokusek et al. | Jun 2007 | A1 |
20070166396 | Badylak et al. | Jul 2007 | A1 |
20070185374 | Kick et al. | Aug 2007 | A1 |
20070185938 | Prahlad et al. | Aug 2007 | A1 |
20070185939 | Prahland et al. | Aug 2007 | A1 |
20070200821 | Conradt et al. | Aug 2007 | A1 |
20070208429 | Leahy | Sep 2007 | A1 |
20070208498 | Barker | Sep 2007 | A1 |
20070233162 | Gannoe et al. | Oct 2007 | A1 |
20070233635 | Burfeind et al. | Oct 2007 | A1 |
20070276596 | Solomon et al. | Nov 2007 | A1 |
20070276911 | Bhumkar et al. | Nov 2007 | A1 |
20070293716 | Baker et al. | Dec 2007 | A1 |
20070293958 | Stehle et al. | Dec 2007 | A1 |
20080015523 | Baker | Jan 2008 | A1 |
20080086455 | Meisels et al. | Apr 2008 | A1 |
20080109718 | Narayanaswami | May 2008 | A1 |
20080133282 | Landar et al. | Jun 2008 | A1 |
20080162031 | Okuyama et al. | Jul 2008 | A1 |
20080208356 | Stack et al. | Aug 2008 | A1 |
20080208450 | Katzer | Aug 2008 | A1 |
20080228030 | Godin | Sep 2008 | A1 |
20080228393 | Geelen et al. | Sep 2008 | A1 |
20080238941 | Kinnan et al. | Oct 2008 | A1 |
20080250334 | Price | Oct 2008 | A1 |
20080255678 | Cully et al. | Oct 2008 | A1 |
20080319653 | Moshfeghi | Dec 2008 | A1 |
20090005082 | Forstall et al. | Jan 2009 | A1 |
20090005981 | Forstall et al. | Jan 2009 | A1 |
20090006994 | Forstall et al. | Jan 2009 | A1 |
20090010405 | Toebes | Jan 2009 | A1 |
20090012553 | Swain et al. | Jan 2009 | A1 |
20090016504 | Mantell et al. | Jan 2009 | A1 |
20090037093 | Kurihara et al. | Feb 2009 | A1 |
20090063041 | Hirose et al. | Mar 2009 | A1 |
20090063048 | Tsuji | Mar 2009 | A1 |
20090100037 | Scheibe | Apr 2009 | A1 |
20090143977 | Beletski et al. | Jun 2009 | A1 |
20090156178 | Elsey et al. | Jun 2009 | A1 |
20090157294 | Geelen et al. | Jun 2009 | A1 |
20090157615 | Ross et al. | Jun 2009 | A1 |
20090164110 | Basir | Jun 2009 | A1 |
20090177215 | Stack et al. | Jul 2009 | A1 |
20090182497 | Hagiwara | Jul 2009 | A1 |
20090192702 | Bourne et al. | Jul 2009 | A1 |
20090216434 | Panganiban et al. | Aug 2009 | A1 |
20090240427 | Siereveld et al. | Sep 2009 | A1 |
20090254273 | Gill et al. | Oct 2009 | A1 |
20090284476 | Bull et al. | Nov 2009 | A1 |
20090293011 | Nassar | Nov 2009 | A1 |
20090326803 | Neef et al. | Dec 2009 | A1 |
20100010738 | Cho | Jan 2010 | A1 |
20100045704 | Kim | Feb 2010 | A1 |
20100067631 | Ton | Mar 2010 | A1 |
20100069054 | Labidi et al. | Mar 2010 | A1 |
20100070253 | Hirata et al. | Mar 2010 | A1 |
20100082239 | Hardy et al. | Apr 2010 | A1 |
20100088631 | Schiller | Apr 2010 | A1 |
20100100310 | Eich et al. | Apr 2010 | A1 |
20100153010 | Huang | Jun 2010 | A1 |
20100174790 | Dubs et al. | Jul 2010 | A1 |
20100174998 | Lazarus et al. | Jul 2010 | A1 |
20100179750 | Gum | Jul 2010 | A1 |
20100185382 | Barker et al. | Jul 2010 | A1 |
20100186244 | Schwindt | Jul 2010 | A1 |
20100191454 | Shirai et al. | Jul 2010 | A1 |
20100204914 | Gad et al. | Aug 2010 | A1 |
20100220250 | Vanderwall et al. | Sep 2010 | A1 |
20100248746 | Saavedra et al. | Sep 2010 | A1 |
20100287024 | Ward et al. | Nov 2010 | A1 |
20100293462 | Bull et al. | Nov 2010 | A1 |
20100295803 | Kim et al. | Nov 2010 | A1 |
20100309147 | Fleizach et al. | Dec 2010 | A1 |
20100309149 | Blumenberg et al. | Dec 2010 | A1 |
20100312466 | Katzer et al. | Dec 2010 | A1 |
20100312838 | Lyon et al. | Dec 2010 | A1 |
20100324816 | Highstrom et al. | Dec 2010 | A1 |
20100328100 | Fujiwara et al. | Dec 2010 | A1 |
20110022305 | Okamoto | Jan 2011 | A1 |
20110029237 | Kamalski | Feb 2011 | A1 |
20110039584 | Merrett | Feb 2011 | A1 |
20110077850 | Ushida | Mar 2011 | A1 |
20110082620 | Small et al. | Apr 2011 | A1 |
20110082627 | Small et al. | Apr 2011 | A1 |
20110090078 | Kim et al. | Apr 2011 | A1 |
20110098918 | Siliski et al. | Apr 2011 | A1 |
20110106592 | Stehle et al. | May 2011 | A1 |
20110112750 | Lukassen | May 2011 | A1 |
20110137834 | Ide et al. | Jun 2011 | A1 |
20110143726 | De Silva | Jun 2011 | A1 |
20110145863 | Alsina et al. | Jun 2011 | A1 |
20110153186 | Jakobson | Jun 2011 | A1 |
20110161001 | Fink | Jun 2011 | A1 |
20110167058 | Van Os | Jul 2011 | A1 |
20110170682 | Kale et al. | Jul 2011 | A1 |
20110183627 | Ueda et al. | Jul 2011 | A1 |
20110185390 | Faenger et al. | Jul 2011 | A1 |
20110191516 | Xiong et al. | Aug 2011 | A1 |
20110194028 | Dove et al. | Aug 2011 | A1 |
20110210922 | Griffin | Sep 2011 | A1 |
20110213785 | Kristiansson et al. | Sep 2011 | A1 |
20110227843 | Wang | Sep 2011 | A1 |
20110230178 | Jones et al. | Sep 2011 | A1 |
20110238289 | Lehmann et al. | Sep 2011 | A1 |
20110238297 | Severson | Sep 2011 | A1 |
20110246891 | Schubert et al. | Oct 2011 | A1 |
20110257973 | Chutorash et al. | Oct 2011 | A1 |
20110264234 | Baker et al. | Oct 2011 | A1 |
20110265003 | Schubert et al. | Oct 2011 | A1 |
20110270517 | Benedetti | Nov 2011 | A1 |
20110270600 | Bose et al. | Nov 2011 | A1 |
20110282576 | Cabral et al. | Nov 2011 | A1 |
20110285717 | Schmidt et al. | Nov 2011 | A1 |
20110291860 | Ozaki et al. | Dec 2011 | A1 |
20110291863 | Ozaki et al. | Dec 2011 | A1 |
20110298724 | Ameling et al. | Dec 2011 | A1 |
20110307455 | Gupta et al. | Dec 2011 | A1 |
20120016554 | Huang | Jan 2012 | A1 |
20120035924 | Jitkoff et al. | Feb 2012 | A1 |
20120041674 | Katzer | Feb 2012 | A1 |
20120059812 | Bliss et al. | Mar 2012 | A1 |
20120065814 | Seok | Mar 2012 | A1 |
20120095675 | Tom et al. | Apr 2012 | A1 |
20120109516 | Miyazaki et al. | May 2012 | A1 |
20120143503 | Hirai et al. | Jun 2012 | A1 |
20120143504 | Kalai et al. | Jun 2012 | A1 |
20120155800 | Cottrell et al. | Jun 2012 | A1 |
20120179361 | Mineta et al. | Jul 2012 | A1 |
20120179365 | Miyahara et al. | Jul 2012 | A1 |
20120191343 | Haleem | Jul 2012 | A1 |
20120208559 | Svendsen et al. | Aug 2012 | A1 |
20120253659 | Pu et al. | Oct 2012 | A1 |
20120254804 | Sheha et al. | Oct 2012 | A1 |
20120260188 | Park et al. | Oct 2012 | A1 |
20120265433 | Viola et al. | Oct 2012 | A1 |
20120303263 | Alam et al. | Nov 2012 | A1 |
20120303268 | Su et al. | Nov 2012 | A1 |
20120310882 | Werner et al. | Dec 2012 | A1 |
20120322458 | Shklarski et al. | Dec 2012 | A1 |
20120329520 | Akama | Dec 2012 | A1 |
20130006520 | Dhanani et al. | Jan 2013 | A1 |
20130035853 | Stout et al. | Feb 2013 | A1 |
20130110343 | Ichikawa et al. | May 2013 | A1 |
20130110842 | Donneau-Golencer et al. | May 2013 | A1 |
20130130742 | Dietz et al. | May 2013 | A1 |
20130158855 | Weir et al. | Jun 2013 | A1 |
20130166096 | Jotanovic | Jun 2013 | A1 |
20130173577 | Cheng et al. | Jul 2013 | A1 |
20130190978 | Kato et al. | Jul 2013 | A1 |
20130191020 | Emani et al. | Jul 2013 | A1 |
20130191790 | Kawalkar | Jul 2013 | A1 |
20130238241 | Chelotti et al. | Sep 2013 | A1 |
20130275899 | Schubert et al. | Oct 2013 | A1 |
20130321178 | Jameel et al. | Dec 2013 | A1 |
20130322665 | Bennett et al. | Dec 2013 | A1 |
20130325332 | Rhee et al. | Dec 2013 | A1 |
20130325856 | Soto et al. | Dec 2013 | A1 |
20130326384 | Moore et al. | Dec 2013 | A1 |
20130344899 | Stamm et al. | Dec 2013 | A1 |
20130345961 | Leader et al. | Dec 2013 | A1 |
20130345975 | Vulcano et al. | Dec 2013 | A1 |
20140058659 | Kolling | Feb 2014 | A1 |
20140093100 | Jeong et al. | Apr 2014 | A1 |
20140095066 | Bouillet et al. | Apr 2014 | A1 |
20140122605 | Merom et al. | May 2014 | A1 |
20140123062 | Nguyen | May 2014 | A1 |
20140137219 | Castro et al. | May 2014 | A1 |
20140156262 | Yuen et al. | Jun 2014 | A1 |
20140163882 | Stahl et al. | Jun 2014 | A1 |
20140171129 | Benzatti et al. | Jun 2014 | A1 |
20140207373 | Vedran | Jul 2014 | A1 |
20140236916 | Barrington et al. | Aug 2014 | A1 |
20140277937 | Scholz et al. | Sep 2014 | A1 |
20140278051 | Mcgavran et al. | Sep 2014 | A1 |
20140278070 | Mcgavran et al. | Sep 2014 | A1 |
20140278086 | San et al. | Sep 2014 | A1 |
20140279723 | Mcgavran et al. | Sep 2014 | A1 |
20140281955 | Sprenger | Sep 2014 | A1 |
20140309914 | Scofield et al. | Oct 2014 | A1 |
20140317086 | James | Oct 2014 | A1 |
20140344420 | Rjeili et al. | Nov 2014 | A1 |
20140358437 | Fletcher | Dec 2014 | A1 |
20140358438 | Cerny et al. | Dec 2014 | A1 |
20140364149 | Marti et al. | Dec 2014 | A1 |
20140364150 | Marti et al. | Dec 2014 | A1 |
20140365113 | Mcgavran et al. | Dec 2014 | A1 |
20140365120 | Vulcano et al. | Dec 2014 | A1 |
20140365124 | Vulcano et al. | Dec 2014 | A1 |
20140365125 | Vulcano et al. | Dec 2014 | A1 |
20140365126 | Vulcano et al. | Dec 2014 | A1 |
20140365459 | Clark et al. | Dec 2014 | A1 |
20140365505 | Clark et al. | Dec 2014 | A1 |
20150032366 | Man et al. | Jan 2015 | A1 |
20150066360 | Kirsch | Mar 2015 | A1 |
20150139407 | Maguire et al. | May 2015 | A1 |
20150161267 | Sugawara et al. | Jun 2015 | A1 |
20150177017 | Jones | Jun 2015 | A1 |
20150282230 | Kim | Oct 2015 | A1 |
20150334171 | Baalu et al. | Nov 2015 | A1 |
20160183063 | Kang et al. | Jun 2016 | A1 |
20160212229 | Mcgavran et al. | Jul 2016 | A1 |
20170132713 | Bowne et al. | May 2017 | A1 |
20170176208 | Chung et al. | Jun 2017 | A1 |
20170205243 | Moore et al. | Jul 2017 | A1 |
20170205246 | Koenig et al. | Jul 2017 | A1 |
20170350703 | Mcgavran et al. | Dec 2017 | A1 |
20170358033 | Montoya et al. | Dec 2017 | A1 |
20190025070 | Moore et al. | Jan 2019 | A1 |
20190063940 | Moore et al. | Feb 2019 | A1 |
Number | Date | Country |
---|---|---|
2014235244 | Sep 2015 | AU |
2014235248 | Sep 2015 | AU |
1754147 | Mar 2006 | CN |
1815438 | Aug 2006 | CN |
1900657 | Jan 2007 | CN |
101210824 | Jul 2008 | CN |
101438133 | May 2009 | CN |
101517362 | Aug 2009 | CN |
101582053 | Nov 2009 | CN |
101641568 | Feb 2010 | CN |
201555592 | Aug 2010 | CN |
101858749 | Oct 2010 | CN |
102398600 | Apr 2012 | CN |
102449435 | May 2012 | CN |
102567440 | Jul 2012 | CN |
102607570 | Jul 2012 | CN |
102622233 | Aug 2012 | CN |
102663842 | Sep 2012 | CN |
102713906 | Oct 2012 | CN |
102759359 | Oct 2012 | CN |
102781728 | Nov 2012 | CN |
102826047 | Dec 2012 | CN |
102840866 | Dec 2012 | CN |
102939515 | Feb 2013 | CN |
112013002794 | Apr 2015 | DE |
1063494 | Dec 2000 | EP |
1102037 | May 2001 | EP |
1944724 | Jul 2008 | EP |
1995564 | Nov 2008 | EP |
2355467 | Aug 2011 | EP |
2369299 | Sep 2011 | EP |
2946172 | Nov 2011 | EP |
2479538 | Jul 2012 | EP |
2546104 | Jan 2013 | EP |
2617604 | Jul 2013 | EP |
2672225 | Dec 2013 | EP |
2672226 | Dec 2013 | EP |
2698968 | Feb 2014 | EP |
2778614 | Sep 2014 | EP |
2778615 | Sep 2014 | EP |
2712152 | Sep 2016 | EP |
3101392 | Dec 2016 | EP |
2002-365080 | Dec 2002 | JP |
2003-207340 | Jul 2003 | JP |
2005-004527 | Jan 2005 | JP |
2005-031068 | Feb 2005 | JP |
2005-198345 | Jul 2005 | JP |
2005-228020 | Aug 2005 | JP |
3957062 | Aug 2007 | JP |
2009-098781 | May 2009 | JP |
2010-261803 | Nov 2010 | JP |
2013-508725 | Mar 2013 | JP |
10-1034426 | May 2011 | KR |
10-2012-0069778 | Jun 2012 | KR |
200811422 | Mar 2008 | TW |
200949281 | Dec 2009 | TW |
201017110 | May 2010 | TW |
M389063 | Sep 2010 | TW |
201202079 | Jan 2012 | TW |
201216667 | Apr 2012 | TW |
2005015425 | Feb 2005 | WO |
2005094257 | Oct 2005 | WO |
2008079891 | Jul 2008 | WO |
2008101048 | Aug 2008 | WO |
2009073806 | Jun 2009 | WO |
2009143876 | Dec 2009 | WO |
2010040405 | Apr 2010 | WO |
2011076989 | Jun 2011 | WO |
2011146141 | Nov 2011 | WO |
2011160044 | Dec 2011 | WO |
2012034581 | Mar 2012 | WO |
2012036279 | Mar 2012 | WO |
2012127768 | Sep 2012 | WO |
2012141294 | Oct 2012 | WO |
2012164333 | Dec 2012 | WO |
2013173511 | Nov 2013 | WO |
2013184348 | Dec 2013 | WO |
2013184444 | Dec 2013 | WO |
2013184449 | Dec 2013 | WO |
2014145127 | Sep 2014 | WO |
2014145134 | Sep 2014 | WO |
2014145145 | Sep 2014 | WO |
2014151151 | Sep 2014 | WO |
2014151152 | Sep 2014 | WO |
2014151153 | Sep 2014 | WO |
2014151155 | Sep 2014 | WO |
2014197115 | Dec 2014 | WO |
2014197155 | Dec 2014 | WO |
Entry |
---|
Feng, Tao, et al. “Continuous mobile authentication using touchscreen gestures.” 2012 IEEE conference on technologies for homeland security (HST). IEEE, 2012. (Year: 2012). |
Blandford, Rafe, "Nokia's Terminal Mode (car integration) progressing", www.allaboutsymbian.com, Jul. 15, 2010 (2010-07-15), pp. 1-10. |
Liu Jianghong, “Analysis of the impact of in-vehicle information system on driving safety and improvement measures”, Dec. 31, 2011, vol. 37, No. 4. |
Simonds C., “Software for the next-generation automobile”, IT Professional, Nov. 1, 2003, 5 Pages. |
Sun Jiaping, “Design and implementation of in-vehicle navigation system”, Outstanding Master's Thesis in China Full Text Database Engineering Science and Technology II Series, Feb. 15, 2010. |
Ashbrook et al., "Using GPS to Learn Significant Locations and Predict Movement Across Multiple Users", College of Computing, Atlanta, GA, May 15, 2003, 15 pages. |
Author Unknown, “Android 2.3.4 User's Guide”, May 20, 2011, pp. 1-384, Google, Inc. |
Author Unknown, “Blaupunkt chooses NNG navigation software for new aftermarket product,” May 24, 2011, 2 pages, available at http://telematicsnews.info/2011/05/24/blaupunktchooses-nnq-navigation-software-for-new-aftermarket-product my224 1 /. |
Author Unknown, “Garmin. nuvi 1100/1200/1300/1400 series owner's manual,” Jan. 2011, 72 pages, Garmin Corporation, No. 68, Jangshu 2nd Road, Sijhih, Taipei County, Taiwan. |
Author Unknown, “Google Maps Voice Navigation in Singapore,” software2tech, Jul. 20, 2011, 1 page, available at http://www.youtube.com/watch?v=7B9JN7BkvME. |
Author Unknown, “Hands-on with Sony Mirrorlink head units,” Jan. 13, 2012, 1 page, uudethuong, available at http://www.youtube.com/watch?v=UMkF478_Ax0. |
Author Unknown, “Introducing Mimics—Control your iPhone from a Touch Screen,” May 13, 2011, 3 pages, mp3Car.com, available at http://www.youtube.com/watch?v=YcggnNVTNwl. |
Author Unknown, “iPhone Integration on Mercedes Head Unit,” Dec. 3, 2011, 1 page, Motuslab, available at http://www.youtube.com/watch?v=rXy6lpQAtDo. |
Author Unknown, “Magellan (Registered) Road Mate (Registered) GPS Receiver: 9020/9055 user Manual,” Month Unknown, 2010, 48 pages, MiTAC International Corporation, CA, USA. |
Author Unknown, “Magellan (Registered) Road Mate (Registered): 2010 North America Application User Manual,” Month Unknown, 2009, 24 pages, MiTAC Digital Corporation, CA, USA. |
Author Unknown, “Magellan Maestro 3100 User Manual,” Month Unknown 2007, 4 pages, Magellan Navigation, Inc., San Dimas, USA. |
Author Unknown, “Mazda: Navigation System—Owner's Manual”, available at http://download.tomtom.com/open/manuals/mazda/nva-sd8110/Full_Manual_EN.pdf, Jan. 1, 2009, 159 pages. |
Author Unknown, “Touch & Go' Owner's Manual,” Jul. 2011, 218 pages, Toyota, United Kingdom. |
Braun, Elmar, and Max Muhlhauser. “Automatically generating user interfaces for device federations.” Seventh IEEE International Symposium on Multimedia (ISM'05). IEEE, 2005. (Year: 2005). |
Chumkamon, Sakmongkon, et al., “A Blind Navigation System Using RFID for Indoor Environments,” Proceedings of ECTI-CON 2008, May 14-17, 2008, pp. 765-768, IEEE, Krabi, Thailand. |
Diewald, Stefan, et al., “Mobile Device Integration and Interaction in the Automotive Domain”, Autonui:Automotive Natural User Interfaces Workshop at the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUl '11), Nov. 29-Dec. 2, 2011, 4 pages, XP002732023, Salzburg, Austria. |
Dube, Ryan, “Use Google Maps Navigation For Turn-By-Turn GPS [Android]”, available at http://www.makeuseof.com/tag/google-maps-navigation-turnbyturn-gps-android/, Jun. 24, 2010, 7 Pages. |
Fischer, Philipp, and Andreas Nurnberger. “Adaptive and multimodal interaction in the vehicle.” 2008 IEEE International Conference on Systems, Man and Cybernetics. IEEE, 2008. (Year: 2008). |
Hightower et al., "Learning and Recognizing the Places We Go", Intel Research, Seattle, WA, Jul. 15, 2005, 18 pages. |
Hueger, Fabian. “Platform independent applications for in-vehicle infotainment systems via integration of CE devices.” 2012 IEEE Second International Conference on Consumer Electronics-Berlin (ICCE-Berlin). IEEE, 2012. (Year: 2012). |
Human Motion Prediction for Indoor Mobile Relay Networks Archibald, C.; Ying Zhang; Juan Liu High Performance Computing and Communications (HPCC), 2011 IEEE 13th International Conference on Year: 2011 pp. 989-994, DOI: 10.1109/HPCC.2011.145 Referenced in: IEEE Conference Publications. |
Kim, Sangho, Kosuke Sekiyama, and Toshio Fukuda. “User-adaptive interface with reconfigurable keypad for in-vehicle information systems.” 2008 International Symposium on Micro-NanoMechatronics and Human Science. IEEE, 2008. (Year: 2008). |
Lawrence, Steve, “Review: Sygic Mobile Maps 2009,” Jul. 23, 2009, 4 pages, available at http://www.iphonewzealand.co.nz/2009/all/review-sygic-mobile-maps-2009/. |
Mohan et al., “Adapting Multimedia Internet Content For Universal Access,” IEEE Transactions on Multimedia, Mar. 1999, pp. 104-114. |
Moren, "Google unveils free turn-by-turn directions for Android devices," Macworld, Oct. 28, 2009, available at http://www.macworld.com/article/1143547/android_turnbyturn.html (retrieved Aug. 2016). |
Prabhala, Bhaskar, et al., "Next Place Predictions Based on User Mobility Traces," 2015 IEEE Conference on Computer Communications Workshops (Infocom Wkshps), Apr. 26-May 1, 2015, pp. 93-94, IEEE, Hong Kong, China. |
Ridhawi, I. Al, et al., “A Location-Aware User Tracking and Prediction System,” 2009 Global Information Infrastructure Symposium, Jun. 23-26, 2009, pp. 1-8, IEEE, Hammamet, Tunisia. |
Ruhs, Chris, “My Favorite Android Apps: Maps,” Jun. 24, 2011, 1 page, available at http://www.youtube.com/watch?v=v2aRkLkLT3s. |
Toth, Balint, and Geza Nemeth. “Challenges of creating multimodal interfaces on mobile devices.” ELMAR 2007. IEEE, 2007. (Year: 2007). |
Number | Date | Country | |
---|---|---|---|
20200160223 A1 | May 2020 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15085994 | Mar 2016 | US |
Child | 16747698 | US | |
Parent | 13843796 | Mar 2013 | US |
Child | 15085994 | US |