Live branded dynamic mapping

Information

  • Patent Grant
  • Patent Number
    9,829,339
  • Date Filed
    Wednesday, May 17, 2017
  • Date Issued
    Tuesday, November 28, 2017
Abstract
A live dynamic map that provides for increased convenience for a user at a venue is disclosed. The live dynamic map may be branded for a venue, show points of interest and paths between locations, include a messaging capability, and allow users to be social with one another as well as with venue management. Live branded mapping may allow for similar engagement on a region-by-region, neighborhood-by-neighborhood, or even brand-by-brand basis. By engaging on a hyper-local level, the present mapping platform can better target users and payload delivery and improve upon business-to-consumer brand engagement.
Description
BACKGROUND
Field of the Invention

The present invention is generally related to web services. More specifically, the present invention relates to live dynamic mapping and branding including hyper-local marketing.


Description of the Related Art

Entertainment venues such as theme parks, cruise ships, universities, arenas, resorts, and stadiums are popular family attractions that host thousands of people. Most venues provide static paper maps or signs that allow guests to explore the venue, encourage engagement in one or more activities at the venue, and otherwise attempt to maximize enjoyment while on the premises. The venues often have special events such as concerts, merchandise, culinary, or souvenir sales, and other limited-time or new events that are often of interest to their visitors. It is difficult, if not impossible, to track and communicate with visitors concerning these special events when they are only provided with a paper map upon entrance to the venue. Similar challenges exist for visitors to communicate amongst themselves, especially concerning their past, present, and intended future locations and plans, such as when and where to meet with one another.


There is a need in the art for improved customer communications. Such an improvement is needed so that venues might improve the overall user experience, better engage with and serve customers, track customer needs, and ultimately improve monetization of the user presence at the venue.


SUMMARY OF THE PRESENTLY CLAIMED INVENTION

A first claimed embodiment of the present invention includes a method for providing a map on a display. Through this method, a graphical image of a venue map is shown on a mobile device. The map includes graphics that are not to scale and has latitude and longitude information associated with multiple points on the map. Visual updates of the user are provided on the map as the user navigates through a venue. Personalized messages are provided to the user based on user data collected while the user is in the venue.


A further embodiment includes a device for providing a map. The device includes a display, memory, and a processor. The processor executes instructions stored in memory. Through execution of the instructions, a graphical image of a venue map that includes graphics that are not to scale is displayed. The map includes latitude and longitude information associated with multiple points on the map. Visual updates of the user are provided on the map and personalized messages are delivered to the user.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a system for increasing customer engagement, including customer monetization, including live, dynamic mapping that utilizes branding, including hyper-local marketing.



FIG. 2 illustrates a conceptual view of a live dynamic map.



FIG. 3 illustrates an exemplary computing system that may be utilized to implement one or more embodiments of the present invention.





DETAILED DESCRIPTION

The present invention includes a live dynamic map that provides for increased convenience for a user at a venue. Mobile and web-based clients allow application users to experience the live dynamic map. The live dynamic map may be branded for a venue, show points of interest and paths between locations, include a messaging capability, and allow users to be social with one another as well as venue management. The live dynamic map is a tool that may provide live analytics, assist with monetization, and is personalized for each user.


Live branded mapping may allow for similar engagement on a region-by-region, neighborhood-by-neighborhood, or even brand-by-brand basis. For example, a live branded mapping platform could be implemented not only in a theme park, but on a university campus. The platform could likewise be implemented in the context of a neighborhood such as San Francisco's Mission District or San Diego's North Park neighborhood. By engaging on a hyper-local level (a small, geographically defined community), the present mapping platform can better target users and payload delivery and improve upon business-to-consumer brand engagement.



FIG. 1 illustrates a system for increasing customer engagement, including customer monetization, including live, dynamic mapping that utilizes branding, including hyper-local marketing. The system 100 of FIG. 1 includes an ecosystem of data sources 105 such as mobile devices 110, point-of-sale (POS) or point-of-entry/-exit (POE) terminals 115, and databases 120. Communicatively coupled to data sources 105 are back-end application servers 125. In system 100, application servers 125 can ingest, normalize, and process data collected from mobile devices 110 and various POS or POE terminals 115. Types of information gathered from data sources 105 and processed by back-end application servers 125 are generally inclusive of identity (e.g., user profiles, CRM data, entitlements, demographics, reservation systems and social media sources like Pinterest and Facebook), proximity (e.g., GPS and beacons), and time (e.g., schedules, weather, and queue length).


Mobile devices 110 can execute an application that shares customer engagement data such as current and prior physical locale within a venue, wait times and travel times (e.g., how long was a customer at a particular point in a venue and how long did it take the customer to travel to a further point in the venue), paths to certain points on the map, and other information. Mobile devices 110 are inclusive of wearable devices. Wearable devices (or ‘wearables’) are any type of mobile electronic device that can be worn on the body or attached to or embedded in clothes and accessories of an individual. Processors and sensors associated with a wearable can gather, process, display, and transmit and receive information.


POS data may be gathered at a sales terminal 115 that may interact with a mobile or wearable device 110 to track customer purchase history at a venue or preference for engagement at a particular locale within the venue. POE terminals 115 may provide data related to venue traffic flow, including entry and exit data that can be inclusive of time and volume. POE terminals 115 may likewise interact with mobile and wearable devices 110.


Historical data may also be accessed at databases 120 as a part of the application server 125 processing operation. The results of a processing or normalization operation may likewise be stored for later access and use. Processing and normalization results may also be delivered to front-end applications (and corresponding application servers) that allow for the deployment of contextual experiences and provide a network of services to remote devices as is further described herein.


The present system 100 may be used with and communicate with any number of external front-end devices 135 by way of communications network 130. Communication network 130 may be a local, proprietary network (e.g., an intranet) and/or may be a part of a larger wide-area network. Communication network 130 may include a variety of connected computing devices that provide one or more elements of a network-based service. The communications network 130 may include actual server hardware or virtual hardware simulated by software running on one or more actual machines, thereby allowing for software-controlled scaling in a cloud environment.


Communication network 130 allows for communication between data sources 105 and front-end devices 135 via any number of various communication paths or channels that collectively make up network 130. Such paths and channels may operate utilizing any number of standards or protocols including TCP/IP, 802.11, Bluetooth, GSM, GPRS, 4G, and LTE. Communications network 130 may be a local area network (LAN) that can be communicatively coupled to a wide area network (WAN) such as the Internet operating through one or more network service providers.


Information received and provided over communications network 130 may come from other information systems such as the global positioning system (GPS), cellular service providers, or third-party service providers such as social networks. The system 100 can measure location and proximity using hardware on a user device (e.g., GPS) or collect the data from fixed hardware and infrastructure such as Wi-Fi positioning systems and Radio Frequency ID (RFID) readers. An exemplary location and proximity implementation may include a Bluetooth low-energy beacon with real time proximity detection that can be correlated to latitude/longitude measurements for fixed beacon locations.
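

The beacon-to-coordinate correlation described above is not spelled out in the text; the following is a minimal Python sketch of one plausible implementation, in which sightings of fixed Bluetooth low-energy beacons are mapped onto latitude/longitude through a registry of known beacon locations and weighted by signal strength. The registry contents, function names, and RSSI-to-weight conversion are illustrative assumptions, not part of the disclosure.

# Illustrative sketch: estimating a latitude/longitude from BLE beacon sightings,
# assuming a registry of fixed beacon installations (beacon_id -> (lat, lon)).
BEACON_LOCATIONS = {
    "entrance-01": (33.8121, -117.9190),
    "foodcourt-02": (33.8125, -117.9185),
}

def estimate_position(sightings):
    """sightings maps beacon_id -> RSSI in dBm; a stronger signal means a nearer beacon.

    Returns a weighted (lat, lon) estimate, or None if no known beacon was seen.
    """
    total = lat_acc = lon_acc = 0.0
    for beacon_id, rssi in sightings.items():
        if beacon_id not in BEACON_LOCATIONS:
            continue
        weight = 1.0 / max(1.0, abs(rssi))  # rough proximity weight derived from RSSI
        lat, lon = BEACON_LOCATIONS[beacon_id]
        lat_acc += lat * weight
        lon_acc += lon * weight
        total += weight
    return (lat_acc / total, lon_acc / total) if total else None

# Example: two beacons in range, the first with the stronger signal.
print(estimate_position({"entrance-01": -55, "foodcourt-02": -80}))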


Additional use cases may include phone-based GPS real-time location (latitude/longitude) measurements, phone geo-fencing (real-time notifications when a device moves into or out of location regions), Wi-Fi positioning involving user location detection based on Wi-Fi signal strength (whether active or passive), RFID/Near Field Communication (NFC), and cellular tower positioning involving wide-range detection of user device location, which may occur at the metro level.


Front-end devices 135 are inclusive of kiosks, mobile devices, wearable devices, venue devices, captive portals, digital signs, and POS and POE devices. It should be noted that each of these external devices may be used to gather information about one or more consumers at a particular location during a particular time. Thus, a device that is providing information to a customer on the front-end (i.e., a front-end device 135) such as a mobile device executing an application or a specially designed wearable can also function as a data source 105 as described above.


The system 100 of FIG. 1 provides services to connect venue management with visitors and entertainment consumers while simultaneously providing a messaging platform for consumers. For example, the social network of a consumer may be extended into a map and the physical world associated with the map. Services to extend the social network of a user include finding friends, coordinating rally points, management of proximity based parental controls, serendipitous discovery, and customization and sharing of photos. Venue management may provision consumers with badges, points and rewards, coordinate scavenger hunts and competitions, and provide leaderboard and trivia services. Consumers may also be engaged by collecting feedback and reviews of their experiences, managing favorites and wish lists, conducting surveys and interactive voting, and through the display of messages.



FIG. 2 illustrates a conceptual view of a live dynamic map 200. A live branded map 200 like that shown in FIG. 2 may be presented to a user through a device such as a mobile device, tablet, wearable, or other device executing an application on the particular device. The application may communicate with one or more back-end application servers over a network as illustrated in the system 100 of FIG. 1.


The live dynamic map 200 of FIG. 2 includes conceptual layers of a branded layer 210, points of interest 220, way finding 230, messaging 240, social features 250, live analytics 260, monetization 270, and personalization 280.


The map is branded in the spirit of the venue, which may be hyper-local in nature such as a neighborhood or even brand-based. The branded base layer 210 of the map may be derived from a graphical map provided by the venue. For example, for a venue such as a theme park, the branded base layer 210 of the live branded map 200 may be an artistic map showing the park attractions that is typically provided to guests as they enter the theme park. Branded base layer 210 in a hyper-local initiative such as a neighborhood may illustrate a street or series of streets making up the neighborhood.


Such a map could be two-dimensional and show street information with corresponding information related to venues in that neighborhood. The branded base layer 210 could also be three-dimensional and illustrate physical features of the neighborhood and buildings located therein. Traffic flow information could likewise be illustrated for the neighborhood or area, much in the same way that a venue like a theme park might illustrate wait times for rides or attractions. For example, a neighborhood-based base layer 210 could allow for literal traffic information for streets or wait times at various popular venues. This information may likewise be integrated with points of interest 220, way finding 230, and analytics 260 as described herein.


Live dynamic map 200 may also include points of interest 220. The points of interest may include any point on the map that may be of interest to a guest of the venue, such as a ride, restaurant, bathroom, or other point. Information related to the point of interest 220 may be provided, such as the nature of the point of interest and the services or goods provided there, as well as hours, costs, reviews, specials and deals, or wait times. Specific brand-related information may also be conveyed at a point of interest or as a point of interest in and of itself. Point of interest 220 data may be introduced either natively or through any third-party service operating in conjunction with or co-branding/sponsoring map 200.


In this regard, map 200 could be revised in real-time to reflect different sponsor or brand information. Such live updates would in turn affect various points of interest 220 that may be related to a particular brand and could even affect the underlying brand layer 210 as a whole. Sponsorship and hyper-local branding initiatives may likewise affect other layers of map 200 in real time or near-real time subject to updates and network connectivity.


Live branded map 200 may integrate a way finding component 230 to allow a user to see where on the map the user is currently located and how to get to other points on the map. These points may include points of interest identified in points of interest layer 220 and described above. Way finding may utilize various location based services as described in the context of FIG. 1.


The live dynamic map 200 may be mapped to the physical world using latitude and longitude matching with certain points in the map 200. Oftentimes, the artistic map is not to scale, is disproportionate, and has differing scales at different parts of the map. Markers are used to set local rules regarding how longitude and latitude should map to a point on the artistic map of the venue. The markers may be placed at features and locations in the artistic map that have distortion in scale.


The rules associated with these markers will control how the map behaves in that area. For example, a particular rule may affect how the map 200 represents user movement at that location. The marker location and disposition affect rule complexity. The same marker and rule logic determines user location and path identification.


In some instances, the platform uses a position strategy to convert latitude and longitude location information into map x/y position information. The strategy can be different for maps of different venues. Additionally, one venue may have multiple maps, and one or more maps for a particular venue may have a different strategy than one or more other maps for that venue. One strategy utilizes a linear transformation to convert latitude and longitude location to map x/y position. The linear transformation uses a series of markers to establish the conversion rules for a particular map. Markers are fixed positions that map latitude and longitude location to x/y positions on the map.


The simplest linear transformation strategy may use the closest markers, for example the closest three markers. The latitude and longitude locations of those markers may be converted using conversion rules. The transformation rules may be managed using the following linear transformation:

X(1,2,3)=A*a+B*b+E,
Y(1,2,3)=C*a+D*b+F,

where a is the latitude and b is the longitude for any given location. The constants A-F are computed using the latitude, longitude and x, y values from markers 1, 2, and 3:

A=(b1*(x3−x2)+b2*(x1−x3)+b3*(x2−x1))/(a1*(b2−b3)+a2*(b3−b1)+a3*(b1−b2)),
B=(x2−x1+A*(a1−a2))/(b2−b1),
C=(b1*(y3−y2)+b2*(y1−y3)+b3*(y2−y1))/(a1*(b2−b3)+a2*(b3−b1)+a3*(b1−b2)),
D=(y2−y1+C*(a1−a2))/(b2−b1),
E=x1−A*a1−B*b1,
F=y1−C*a1−D*b1.
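

For illustration, the constants and conversion above can be computed directly from three marker records. The following Python sketch assumes each marker is a (latitude, longitude, x, y) tuple; the function names and sample marker values are illustrative only, and markers 1 and 2 are assumed to differ in longitude so the division by (b2−b1) is defined.

# Sketch of the three-marker linear transformation described above.
def solve_linear_transform(m1, m2, m3):
    """Return (A, B, C, D, E, F) for X = A*a + B*b + E and Y = C*a + D*b + F."""
    (a1, b1, x1, y1), (a2, b2, x2, y2), (a3, b3, x3, y3) = m1, m2, m3
    denom = a1 * (b2 - b3) + a2 * (b3 - b1) + a3 * (b1 - b2)
    A = (b1 * (x3 - x2) + b2 * (x1 - x3) + b3 * (x2 - x1)) / denom
    B = (x2 - x1 + A * (a1 - a2)) / (b2 - b1)
    C = (b1 * (y3 - y2) + b2 * (y1 - y3) + b3 * (y2 - y1)) / denom
    D = (y2 - y1 + C * (a1 - a2)) / (b2 - b1)
    E = x1 - A * a1 - B * b1
    F = y1 - C * a1 - D * b1
    return A, B, C, D, E, F

def to_map_xy(lat, lon, markers):
    """Convert a latitude/longitude location to map x/y using three markers."""
    A, B, C, D, E, F = solve_linear_transform(*markers)
    return A * lat + B * lon + E, C * lat + D * lon + F

# Example with three made-up markers (lat, lon, x, y); the query point sits halfway
# between the markers, so this prints approximately (50.0, 100.0).
markers = [(33.0, -117.0, 0.0, 0.0), (33.0, -117.1, 100.0, 0.0), (33.1, -117.0, 0.0, 200.0)]
print(to_map_xy(33.05, -117.05, markers))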


A more complex strategy may use information from the four closest markers, blending the x/y position computed using the three closest markers 1, 2, 3 with the position computed using markers 1, 2 and the next-closest marker 4, weighted by their relative distances from the target location:

X=X(1,2,3)*w3+X(1,2,4)*w4,
Y=Y(1,2,3)*w3+Y(1,2,4)*w4,

where w3 and w4 are the relative weightings for markers 1, 2, 3 and 1, 2, 4, respectively.


They are computed using d3 and d4, the distances to markers 3 and 4, respectively:

w3=1/d3/(1/d3+1/d4),
w4=1/d4/(1/d3+1/d4).


Another strategy uses a similar pattern, using information from the five closest markers:

X=X(1,2,3)*w3+X(1,2,4)*w4+X(1,2,5)*w5,
Y=Y(1,2,3)*w3+Y(1,2,4)*w4+Y(1,2,5)*w5,
where:
w3=1/d3/(1/d3+1/d4+1/d5),
w4=1/d4/(1/d3+1/d4+1/d5),
w5=1/d5/(1/d3+1/d4+1/d5).
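

The following is a minimal sketch of the blended strategies above, reusing the to_map_xy() helper from the earlier sketch: the x/y positions computed from the marker triples (1, 2, 3), (1, 2, 4), and optionally (1, 2, 5) are combined with inverse-distance weights. Markers are assumed to be pre-sorted by distance from the target location, and the parameter names are illustrative.

def blended_map_xy(lat, lon, markers, distances, extra=2):
    """Blend positions from marker triples (1, 2, k).

    extra=2 uses the four closest markers (triples (1,2,3) and (1,2,4)); extra=3 also
    uses the fifth. distances[k] is the distance from the target to the (k+1)-th
    closest marker, so distances[2] is d3, distances[3] is d4, and so on.
    """
    m1, m2 = markers[0], markers[1]
    inv = [1.0 / distances[k] for k in range(2, 2 + extra)]
    total = sum(inv)
    x = y = 0.0
    for i, k in enumerate(range(2, 2 + extra)):
        xk, yk = to_map_xy(lat, lon, [m1, m2, markers[k]])  # position from triple (1, 2, k+1)
        w = inv[i] / total  # e.g., w3 = (1/d3) / (1/d3 + 1/d4 [+ 1/d5])
        x += xk * w
        y += yk * w
    return x, y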


In some instances, the way finding features may include providing a recommended path to a user between two points. The recommended path may be determined by up-to-date conditions of the venue, including crowds, obstacles, construction, and points of interest along the way.
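

The disclosure does not name a particular routing algorithm, so the following is only a plausible Python sketch: a shortest-path search over a venue walkway graph in which each edge's base walking time is scaled by a live condition factor (crowds, obstacles, construction), so the recommended path adapts to up-to-date conditions. The graph, condition multipliers, and node names are illustrative assumptions.

import heapq

def recommended_path(graph, conditions, start, goal):
    """graph: node -> list of (neighbor, base_minutes); conditions: (a, b) -> multiplier."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue
        for nbr, minutes in graph.get(node, []):
            factor = conditions.get((node, nbr), 1.0)  # 1.0 = normal, >1.0 = crowded/blocked
            new_cost = cost + minutes * factor
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
    return None

# Example: heavy crowds on the direct walkway make the detour through the plaza preferable.
graph = {"gate": [("plaza", 3), ("ride", 10)], "plaza": [("ride", 4)], "ride": []}
conditions = {("gate", "ride"): 3.0}
print(recommended_path(graph, conditions, "gate", "ride"))  # (7.0, ['gate', 'plaza', 'ride'])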


The messaging feature 240 of the live branded map 200 may include perishable and visual communication features to communicate information about limited-time offers at various points of interest, which may correlate to user location or to the occurrence of events at various locales within a venue or hyper-local market, or to allow user-to-user communication. The social capability of the live dynamic map may provide an overlaying social graph 250 onto the map. For example, the live branded map 200 may indicate to a user if any contacts from a third-party networking service are present at the venue. The messaging may include serendipitous discovery, in which contacts of the user through a third-party service are shown as available at the venue, and matters of interest to the user that are present at the venue are communicated to the user. The messaging feature 240 may be integrated with the social graph 250 component to allow for contextual flash mobs that occur when a certain condition is met, for example if fifty people gather at the event by a particular time and at a particular location. Various offers and rewards may result from such occurrences.
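

As an illustration of the conditional trigger described above (not a mechanism spelled out in the text), the following Python sketch counts distinct check-ins at a location before a deadline and fires once the gathering threshold is met; the class and field names are assumptions.

from datetime import datetime

class FlashMobOffer:
    """Fires once required_count distinct users check in at location_id before deadline."""

    def __init__(self, location_id, deadline, required_count=50):
        self.location_id = location_id
        self.deadline = deadline
        self.required_count = required_count
        self.checked_in = set()

    def check_in(self, user_id, location_id, when):
        # Count the check-in only if it is at the right place and before the deadline.
        if location_id == self.location_id and when <= self.deadline:
            self.checked_in.add(user_id)
        return len(self.checked_in) >= self.required_count

# Example with a lowered threshold of three users.
offer = FlashMobOffer("fountain-plaza", datetime(2017, 11, 28, 15, 0), required_count=3)
for user in ("u1", "u2", "u3"):
    met = offer.check_in(user, "fountain-plaza", datetime(2017, 11, 28, 14, 30))
print(met)  # True once the third user has gathered in time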


The live dynamic map 200 of FIG. 2 may include live analytics 260 capabilities. The analytics may monitor user location within the venue, activity (e.g., seeing a show), movement, and other parameters associated with the user. The analytics may be used to determine what promotions, marketing, offers, messages, and other content are communicated to a user via the monetization layer 270 or through the likes of messaging features 240. Live analytics may also include or rely upon providing the location and status of hardware sensors placed within the venue or hyper-local market such as a neighborhood. Sensors presented on the map may include Bluetooth low-energy beacons, RFID/NFC readers, Wi-Fi access points, and other sensors utilized to collect data on user location, proximity, and access control.


The personalization layer 280 ensures that each live map is personalized for a particular user. The personalization layer 280 personalizes a live map 200 by pushing custom itineraries to each user, providing a “guide” to a user based on data collected about the user within the venue, and other information provided to the user based on data associated with the user.



FIG. 3 illustrates an exemplary computing system that may be utilized to implement one or more embodiments of the present invention. System 300 of FIG. 3, or portions thereof, may be implemented in the likes of client computers, application servers, web servers, mobile devices, wearable devices, and other computing devices. The computing system 300 of FIG. 3 includes one or more processors 310 and main memory 320. Main memory 320 stores, in part, instructions and data for execution by processor 310. Main memory 320 can store the executable code when in operation. The system 300 of FIG. 3 further includes a mass storage device 330, portable storage medium drive(s) 340, output devices 350, user input devices 360, a graphics display 370, and peripheral device ports 380.


While the components shown in FIG. 3 are depicted as being connected via a single bus 390, they may be connected through one or more internal data transport means. For example, processor 310 and main memory 320 may be connected via a local microprocessor bus while mass storage device 330, peripheral device port(s) 380, portable storage device 340, and display system 370 may be connected via one or more input/output (I/O) buses.


Mass storage device 330, which could be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor 310. Mass storage device 330 can store software for implementing embodiments of the present invention, including the live branded map described in the context of FIG. 2.


Portable storage medium drive(s) 340 operates in conjunction with a portable non-volatile storage medium such as a flash drive or portable hard drive to input and output data and corresponding executable code to system 300 of FIG. 3. Like mass storage device 330, software for implementing embodiments of the present invention (e.g., the live branded map of FIG. 2) may be stored on a portable medium and input to the system 300 via said portable storage.


Input devices 360 provide a portion of a user interface. Input devices 360 may include an alpha-numeric keypad, such as a keyboard, for inputting alpha-numeric and other information, or a pointing device, such as a mouse. Input device 360 may likewise encompass a touchscreen display, microphone, and other input devices including virtual reality (VR) components. System 300 likewise includes output devices 350, which may include speakers or ports for displays, or other monitor devices. Input devices 360 and output devices 350 may also include network interfaces that allow for access to cellular, Wi-Fi, Bluetooth, or other hard-wired networks.


Display system 370 may include a liquid crystal display (LCD), LED display, touch screen display, or other suitable display device. Display system 370 receives textual and graphical information, and processes the information for output to the display device. In some instances, display system 370 may be integrated with or a part of input device 360 and output device 350 (e.g., a touchscreen). Peripheral ports 380 may include any type of computer support device to add additional functionality to the computer system. For example, peripheral device(s) 380 may include a modem or a router or other network communications implementation (e.g., a MiFi hotspot device).


The components illustrated in FIG. 3 are those typically found in computer systems that may be suitable for use with embodiments of the present invention. In this regard, system 300 represents a broad category of such computer components that are well known in the art. System 300 of FIG. 3 can be a personal computer, hand held computing device, smart phone, tablet computer, mobile computing device, wearable, workstation, server, minicomputer, mainframe computer, or any other computing device.


System 300 can include different bus configurations, network platforms, processor configurations, and operating systems, including but not limited to Unix, Linux, Windows, iOS, Palm OS, and Android OS. System 300 may also include components such as antennas, microphones, cameras, position and location detecting devices, and other components typically found on mobile devices. An antenna may include one or more antennas for communicating wirelessly with another device. An antenna may be used, for example, to communicate wirelessly via Wi-Fi or Bluetooth, with a cellular network, or with other wireless protocols and systems. The one or more antennas may be controlled by a processor, which may include a controller, to transmit and receive wireless signals. For example, the processor may execute programs stored in memory to control the antenna to transmit a wireless signal to a cellular network and receive a wireless signal from the cellular network. A microphone may include one or more microphone devices that transmit captured acoustic signals to the processor and memory. The acoustic signals may be processed for transmission over a network via the antenna.


The foregoing detailed description of the technology herein has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the technology be defined by the claims appended hereto.

Claims
  • 1. A method for personalized mapping, comprising: generating a dynamic conversion mapping based on a plurality of fixed-position markers such that the dynamic conversion mapping allows conversion of location data between a graphical coordinate grid of a graphical map and a latitude-longitude location grid, wherein the graphical map includes a plurality of regions of which at least one region maps onto the latitude-longitude location grid at a different scale than at other regions of the plurality of regions, and wherein each fixed-position marker of the plurality of fixed-position markers has a known latitude-longitude marker location on the latitude-longitude location grid and a known graphical marker location on the graphical map; receiving a user locator dataset identifying a location of a user mobile device, wherein the location of the user mobile device is identified via a Global Positioning System (GPS) receiver associated with the user mobile device; generating a personalized map identifying the location of the user mobile device within the graphical map; transmitting personalized map data to the user mobile device, thereby displaying the personalized map at the user mobile device; and transmitting a personalized message data to the user mobile device, thereby displaying a personalized message at the user mobile device.
  • 2. The method of claim 1, wherein the user locator dataset identifies the location of the user mobile device according to the latitude-longitude location grid, and further wherein generating the personalized map includes using the generated dynamic conversion mapping to convert the location of the user mobile device from the latitude-longitude location grid to the graphical coordinate grid of the graphical map.
  • 3. The method of claim 1, wherein the user locator dataset identifies the location of the user mobile device according to a graphical user location on the graphical map.
  • 4. The method of claim 1, wherein the user locator dataset is received from the user mobile device.
  • 5. The method of claim 1, wherein the user locator dataset is received from a venue device in communication with the user mobile device, wherein the venue device is at least one of a point-of-entry device, a point-of-exit device, or a point-of-sale device.
  • 6. The method of claim 1, wherein the user locator dataset is received from one or more sensors including at least one of a Bluetooth low-energy beacon, a radio-frequency identification reader, a near-field-communication reader, or a Wi-Fi access point.
  • 7. The method of claim 1, wherein the personalized message is a suggested route illustrated within the personalized map.
  • 8. The method of claim 7, wherein the suggested route is dynamically updated based on a condition, the condition being at least one of the location of the user mobile device, a crowd, an obstacle, a construction area, one or more points of interest, a wait time, or street traffic.
  • 9. The method of claim 1, further comprising: receiving an updated user locator dataset identifying an updated location of the user mobile device; and updating the personalized map so that the personal map identifies the updated location of the user mobile device.
  • 10. The method of claim 9, wherein the personalized map that is displayed at the user mobile device is updated to identify the updated location of the user mobile device in near-real-time.
  • 11. The method of claim 1, wherein the personalized message includes at least one of a marketing message, a promotion, or an offer.
  • 12. The method of claim 1, wherein the personalized map identifies one or more locations of one or more sensors, the one or more sensors including at least one of a Bluetooth low-energy beacon, a radio-frequency identification reader, a near-field-communication reader, or a Wi-Fi access point.
  • 13. The method of claim 1, wherein the personalized map identifies one or more points of interest, the one or more points of interest including at least one of a ride, an attraction, a restaurant, a bathroom, or a street.
  • 14. The method of claim 1, wherein the personalized map identifies one or more contact locations, where each contact location of the one or more contact locations identifies a location of a contact of a user associated with the user mobile device.
  • 15. The method of claim 1, wherein the graphical map includes a graphical depiction of a venue, the venue being at least one of a theme park, a cruise ship, a university campus, an arena, a resort, a stadium, an entertainment venue, or a neighborhood.
  • 16. A system for personalized mapping, the system comprising: a memory storing instructions; and a processor, wherein execution of the instructions by the processor causes the processor to: generate a dynamic conversion mapping that allows conversion of location data between a graphical coordinate grid of a graphical map and a latitude-longitude location grid, wherein the graphical map includes a plurality of regions of which at least one region maps onto the latitude-longitude location grid at a different scale than at other regions of the plurality of regions, and wherein each fixed-position marker of a plurality of fixed-position markers has a known latitude-longitude marker location on the latitude-longitude location grid and a known graphical marker location on the graphical map, receive a user locator dataset identifying a location of a user mobile device, wherein the location of the user mobile device is identified via a Global Positioning System (GPS) receiver associated with the user mobile device, generate a personalized map identifying the location of the user mobile device within the graphical map, transmit personalized map data to the user mobile device, thereby displaying the personalized map at the user mobile device, and transmit a personalized message data to the user mobile device, thereby displaying a personalized message at the user mobile device.
  • 17. The system of claim 16, further comprising the user mobile device, the user mobile device executing a software application that causes the user mobile device to display the personalized map.
  • 18. The system of claim 16, further comprising a point of entry-exit device, wherein execution of the instructions by the processor causes the processor to receive the location of the user mobile device from the point of entry-exit device.
  • 19. The system of claim 16, wherein the location of the user mobile device is identified further via one or more sensors, the one or more sensors including at least one of a Bluetooth low-energy beacon, a radio-frequency identification reader, a near-field-communication reader, or a Wi-Fi access point.
  • 20. A non-transitory computer-readable storage medium, having embodied thereon a program executable by a processor to perform a method for personalized mapping, the method comprising: obtaining a latitude-longitude user location of a user mobile device via a Global Positioning System (GPS) receiver associated with the user mobile device, the latitude-longitude user location based on a latitude-longitude location grid; transmitting the latitude-longitude user location to an application server, thereby causing the application server to generate a personalized map identifying a graphical user location of the user mobile device within a graphical map, the personalized map generated using a dynamic conversion mapping that converts the latitude-longitude user location into the graphical user location based on a graphical coordinate grid of the graphical map, wherein the graphical map includes a plurality of regions of which at least one region maps onto the latitude-longitude location grid at a different scale than at other regions of the plurality of regions, and wherein the dynamic conversion mapping is based on a plurality of fixed-position markers such that each fixed-position marker of the plurality of fixed-position markers has a known latitude-longitude marker location on the latitude-longitude location grid and a known graphical marker location on the graphical map; receiving the generated personalized map from the application server; receiving a personalized message from the application server; and displaying the personalized map and the personalized message.
CROSS REFERENCE TO RELATED APPLICATIONS

The present application is a divisional application of, and claims the priority benefit of U.S. patent application Ser. No. 15/271,087 filed Sep. 20, 2016, which is a continuation application of, and claims the priority benefit of U.S. patent application Ser. No. 14/632,872 filed Feb. 26, 2015, now issued U.S. Pat. No. 9,448,085, which claims the priority benefit of U.S. provisional application No. 61/945,049, filed Feb. 26, 2014, the disclosures of which are incorporated herein by reference.

US Referenced Citations (135)
Number Name Date Kind
4873513 Soults Oct 1989 A
5978744 McBride Nov 1999 A
6142368 Mullins et al. Nov 2000 A
6223559 Coleman May 2001 B1
6320496 Sokoler et al. Nov 2001 B1
6352205 Mullins et al. Mar 2002 B1
6414635 Stewart et al. Jul 2002 B1
6474557 Mullins et al. Nov 2002 B2
6493630 Ruiz et al. Dec 2002 B2
6587787 Yokota Jul 2003 B1
6663006 Mullins et al. Dec 2003 B2
6687608 Sugimoto et al. Feb 2004 B2
6997380 Safael et al. Feb 2006 B2
7222080 Hale et al. May 2007 B2
7558678 Jones Jul 2009 B2
7992773 Rothschild Aug 2011 B1
8368695 Howell Feb 2013 B2
8424752 Rothschild Apr 2013 B2
8427510 Towfiq Apr 2013 B1
8433342 Boyle et al. Apr 2013 B1
8625796 Ben Ayed Jan 2014 B1
8651369 Rothschild Feb 2014 B2
8936190 Rothschild Jan 2015 B2
9485322 Krishnaswamy et al. Nov 2016 B2
9488085 Crawford et al. Nov 2016 B2
9741022 Ziskind Aug 2017 B2
20020029226 Li Mar 2002 A1
20020055863 Behaylo May 2002 A1
20030007464 Balani Jan 2003 A1
20040224703 Takaki et al. Nov 2004 A1
20060074550 Freer et al. Apr 2006 A1
20060087474 Do et al. Apr 2006 A1
20060106850 Morgan et al. May 2006 A1
20070032269 Shostak Feb 2007 A1
20070174115 Chieu et al. Jul 2007 A1
20070197247 Inselberg Aug 2007 A1
20070270166 Hampel et al. Nov 2007 A1
20080059889 Parker Mar 2008 A1
20080183582 Major Jul 2008 A1
20080186164 Emigh Aug 2008 A1
20080290182 Bell et al. Nov 2008 A1
20080306826 Kramer et al. Dec 2008 A1
20090017798 Pop Jan 2009 A1
20090027418 Maru Jan 2009 A1
20090089131 Moukas et al. Apr 2009 A1
20090265428 Light et al. Oct 2009 A1
20090319306 Chanick Dec 2009 A1
20100037141 Carter et al. Feb 2010 A1
20100042320 Salmre et al. Feb 2010 A1
20100077036 DeLuca et al. Mar 2010 A1
20100161432 Kumanov et al. Jun 2010 A1
20100194784 Hoff et al. Aug 2010 A1
20110054976 Adler et al. Mar 2011 A1
20110078026 Durham Mar 2011 A1
20110090123 Sridhara et al. Apr 2011 A1
20110136507 Hauser et al. Jun 2011 A1
20110173545 Meola Jul 2011 A1
20110221745 Goldman et al. Sep 2011 A1
20110246148 Gupta et al. Oct 2011 A1
20110267369 Olsen et al. Nov 2011 A1
20120024947 Naelon Feb 2012 A1
20120069131 Abelow Mar 2012 A1
20120081250 Farrokhi et al. Apr 2012 A1
20120096490 Barnes Apr 2012 A1
20120166960 Salles Jun 2012 A1
20120274642 Ofek Nov 2012 A1
20120284117 Karandikar Nov 2012 A1
20130024265 Lotzof Jan 2013 A1
20130036455 Bodi et al. Feb 2013 A1
20130052990 Zhang Feb 2013 A1
20130059603 Guenec et al. Mar 2013 A1
20130085834 Witherspoon et al. Apr 2013 A1
20130132230 Gibson et al. May 2013 A1
20130137464 Kramer et al. May 2013 A1
20130157655 Smith et al. Jun 2013 A1
20130158867 Sidhu et al. Jun 2013 A1
20130173377 Keller et al. Jul 2013 A1
20130191213 Beck et al. Jul 2013 A1
20130225282 Williams et al. Aug 2013 A1
20130231135 Garskof Sep 2013 A1
20130267260 Chao et al. Oct 2013 A1
20130281084 Batada et al. Oct 2013 A1
20130317944 Huang et al. Nov 2013 A1
20130339073 Dabbiere Dec 2013 A1
20140025466 Bortolin et al. Jan 2014 A1
20140073363 Tidd et al. Mar 2014 A1
20140082509 Roumeliotis et al. Mar 2014 A1
20140118113 Kaushik et al. May 2014 A1
20140122040 Marti May 2014 A1
20140128103 Joao et al. May 2014 A1
20140129266 Perl et al. May 2014 A1
20140162693 Wachter et al. Jun 2014 A1
20140164761 Kufluk et al. Jun 2014 A1
20140188614 Badenhop Jul 2014 A1
20140207509 Yu et al. Jul 2014 A1
20140228060 Abhyanker Aug 2014 A1
20140244332 Mermelstein Aug 2014 A1
20140256357 Wang et al. Sep 2014 A1
20140257991 Christensen et al. Sep 2014 A1
20140278054 Tidd et al. Sep 2014 A1
20140292481 Dumas et al. Oct 2014 A1
20140342760 Moldavsky et al. Nov 2014 A1
20150035644 June et al. Feb 2015 A1
20150038171 Uilecan et al. Feb 2015 A1
20150052460 Mohammad Mirzaei et al. Feb 2015 A1
20150058133 Roth et al. Feb 2015 A1
20150080014 Ben-Yosef et al. Mar 2015 A1
20150100398 Narayanaswami et al. Apr 2015 A1
20150127445 Jaffee May 2015 A1
20150176997 Pursche et al. Jun 2015 A1
20150181384 Mayor et al. Jun 2015 A1
20150222935 King et al. Aug 2015 A1
20150233715 Xu et al. Aug 2015 A1
20150237473 Koepke Aug 2015 A1
20150241238 Bass Aug 2015 A1
20150242890 Bass Aug 2015 A1
20150244725 Ziskind Aug 2015 A1
20150262086 Mader et al. Sep 2015 A1
20150262216 Aziz et al. Sep 2015 A1
20150296347 Roth Oct 2015 A1
20150334569 Rangarajan et al. Nov 2015 A1
20150334676 Hart et al. Nov 2015 A1
20160005003 Norris et al. Jan 2016 A1
20160050526 Liu et al. Feb 2016 A1
20160063537 Kumar Mar 2016 A1
20160105644 Smith et al. Apr 2016 A1
20160127351 Smith et al. May 2016 A1
20160150370 Gillespie et al. May 2016 A1
20160242010 Parulski et al. Aug 2016 A1
20160316324 Sahadi Oct 2016 A1
20160321548 Ziskind Nov 2016 A1
20160323708 Sahadi Nov 2016 A1
20170010119 Bass Jan 2017 A1
20170011348 Ziskind Jan 2017 A1
20170162006 Sahadi Jun 2017 A1
Foreign Referenced Citations (8)
Number Date Country
WO 2011159811 Dec 2011 WO
WO 2013163444 Oct 2013 WO
WO 2015017442 Feb 2015 WO
WO 2015130969 Sep 2015 WO
WO 2015130971 Sep 2015 WO
WO 2016172731 Oct 2016 WO
WO 2016176506 Nov 2016 WO
WO 2016179098 Nov 2016 WO
Non-Patent Literature Citations (20)
Entry
Feng, Yue, et al.; “Effective venue image retrieval using robust feature extraction and model constrained matching for mobile robot localization”, Machine Vision and Applications, DOI 10.1007/s00138-011-0350-z, Oct. 28, 2010.
Krueger, Robert; Thom, Dennis; Ertl, Thomas; “Visual Analysis of Movement Behavior using Web Data for Context Enrichment”, Institute for Visualization and Interactive Systems (VIS), Published in Pacific Visualization Symposium (PacificVis), 2014 IEEE, pp. 193-200. IEEE, 2014.
Sim, Robert; Dudek, Gregory; “Effective Exploration Strategies for the Construction of Visual Maps”, Centre for Intelligent Machines, Published in: Intelligent Robots and Systems, 2003. (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on (vol. 4) Date of Conference: Oct. 27-31, 2003.
PCT Application No. PCT/US2004/12667, International Search Report dated Oct. 29, 2004.
PCT Application No. PCT/US2015/017827, International Search Report and Written Opinion dated Jun. 11, 2015.
PCT Application No. PCT/US2015/017829, International Search Report and Written Opinion dated Jun. 8, 2015.
PCT Application No. PCT/US2016/029260, International Search Report and Written Opinion dated Jul. 27, 2016.
PCT Application No. PCT/US2016/029880, International Search Report and Written Opinion dated Jul. 27, 2016.
PCT Application No. PCT/US2016/030424, International Search Report and Written Opinion dated Jul. 29, 2016.
PCT Application No. PCT/US2016/067582, International Search Report and Written Opinion dated Mar. 17, 2017.
U.S. Appl. No. 14/632,872 Office Action dated Mar. 7, 2016.
U.S. Appl. No. 15/271,087 Office Action dated Jun. 7, 2017.
U.S. Appl. No. 14/632,884 Office Action dated May 19, 2017.
U.S. Appl. No. 14/633,015 Office Action dated Apr. 13, 2017.
U.S. Appl. No. 14/633,019 Final Office Action dated Nov. 10, 2016.
U.S. Appl. No. 14/633,019 Office Action dated May 6, 2016.
U.S. Appl. No. 15/138,157 Office Action dated Mar. 9, 2017.
U.S. Appl. No. 15/144,359 Office Action dated Apr. 5, 2017.
U.S. Appl. No. 15/383,710 Office Action dated Aug. 16, 2017.
U.S. Appl. No. 15/683,620, Benjamin H. Ziskind, Parental Controls, filed Aug. 22, 2017.
Related Publications (1)
Number Date Country
20170248438 A1 Aug 2017 US
Provisional Applications (1)
Number Date Country
61945049 Feb 2014 US
Divisions (1)
Number Date Country
Parent 15271087 Sep 2016 US
Child 15597609 US
Continuations (1)
Number Date Country
Parent 14632872 Feb 2015 US
Child 15271087 US