The disclosure relates to systems and methods for augmenting a view of a physical space at a geographically definable location with social media and/or other content originating from the geographically definable location.
The availability of content such as videos, audio files, photos, text, and/or other content over networks such as the Internet has grown at impressive rates. Many Internet and other online service providers make this type of content available to enable users to post and share such content through their services. However, various limitations exist with respect to how this vast amount of information can be effectively monitored and/or selectively displayed.
Because of the vast amount of information and the many different ways in which users share it, it can be difficult to communicate with the creators of the content.
The disclosure relates to systems and methods for augmenting a view of a physical space of one or more geographically definable locations (“geo-locations”) with social media and/or other content originating from the one or more geo-locations. Generally speaking, the system may include a computing device having one or more processors programmed to augment (e.g., add to, overlay, embed, etc.) the view of the physical space of a geo-location with social media content, thereby allowing a user to view a physical space at a geo-location along with social media content that was created from the geo-location.
The one or more processors may be programmed by one or more computer program modules. For example, the one or more processors may be configured to execute a geofeed creation module, a content context module, a reality context module, an augmented reality module, an administration module, a communication module, a user interface module, and/or other modules. The geofeed creation module may be configured to obtain the social media content from one or more social media content providers.
In some implementations, the content context module may be configured to obtain (e.g., receive, retrieve, or determine) contextual information that describes the context in which the social media content was created. The contextual information for the content may include a geo-location, an ambient condition (e.g., temperature), an altitude, a motion or orientation based on sensor measurements from a device used to create the content, and/or other information that describes the context in which the social media content was created. Content context module may be configured to obtain the contextual information from the content itself, such as when the contextual information is available as Exchangeable Image File (“EXIF”) data embedded in images, from the social media content provider, and/or from other sources (e.g., from a user who created the content).
The computer may be configured to determine social media content that is to augment the view of the physical space based on one or more items of the contextual information. The social media content may be filtered in and/or out using various geofeed parameters (e.g., hashtags, identification of types of content, content providers, etc.) described herein. Thus, a user may indicate that certain content be included in and/or excluded from consideration for augmenting the view of the physical space.
In some implementations, the reality context module may be configured to obtain contextual information that describes the context of a view of a physical space. The view of the physical space may include an image being displayed in real-time through a camera lens (e.g., through a display that displays a scene being captured by imaging sensors of a camera), an image that is stored and displayed (e.g., a photograph), and/or other views of a physical space. Contextual information that describes the context of a view of a physical space may include information similar to contextual information that describes social media content. For example, the contextual information that describes the context of the view of the physical space may include a geo-location of the physical space (e.g., a current location for real-time implementations and a location at which the view was taken for stored implementations) and/or other contextual information.
Reality context module may be configured to obtain the reality contextual information from real-time measurements/information (e.g., location information from location sensors, temperature from temperature sensors, etc.). In some implementations, the reality context module may obtain the location based on image recognition of image features such as buildings, structures, and/or other identifiable objects taken from the view of the physical space.
In some implementations, the augmented reality module may compare one or more items of the content contextual information from the content context module with one or more items of the reality contextual information from the reality context module. The augmented reality module may determine a match (which may be exact or inexact) between the content contextual information and the reality contextual information.
Upon determining a match, the augmented reality module may augment the view of the physical space of the geo-location. For example, a location at which the social media content was created may be compared to a geo-location of the physical space being viewed. The augmented reality module may determine that the social media content was created from the geo-location of the physical space being viewed and augment the view of the physical space with the social media content. Other contextual information may be used instead of or in addition to the location information to determine whether social media content should be used to augment the view of the physical space.
Some or all of the processing related to the content context module, the reality context module, and the augmented reality module may be performed at a device used to display the view of the physical space and/or at another device.
By way of example only, in operation, a user may look at a display of the user's mobile device that displays a view of a building and its surroundings at a geo-location, where the view is imaged with a camera of the user's mobile device (the view may be a picture/video or a live shot of the building and its surroundings). At the mobile device and/or at a remote device, reality contextual information that describes the geo-location may be obtained and social media content may be identified based on a comparison between content contextual information and the reality contextual information. For example, the mobile device and/or the remote device may identify social media content that was created from the geo-location of the building and its surroundings being viewed. The mobile device may then augment the view of the building and its surroundings with the identified social media content, thereby enhancing the user's view of the physical space.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
The geo-location may be specified by a boundary, geo coordinates (e.g., latitude, longitude, altitude/depth), an address, a school, a place name, a point of interest, a zip code, a city, a state, a country, and/or other information that can spatially identify an area. The content may be generated by content sources such as individuals, corporations, and/or other entities that may create content. As used hereinafter, “a location,” “a geo-location,” “a geographically definable location,” and similar language is not limited to a single location but may also refer to one or more such locations.
In many instances the content can be automatically tagged with information. The information may include a user identification, date/time information related to the content (e.g., a date and/or time that indicates when the content was created, uploaded, etc.), geographic information that specifies a location where the content was created, uploaded, etc., and/or other information. For example, cameras equipped with a Global Positioning System (“GPS”) unit and/or other location-aware system may embed into an image file latitude/longitude coordinates that indicate where a picture was taken. In addition, modern hand-held devices such as smartphones may be equipped with a GPS sensor, which allows users to generate content with their devices and share the content through a plurality of social networks and other providers. Moreover, some devices allow users to manually input the foregoing and other information for embedding into the content. Furthermore, editing software may allow a user to embed or otherwise manually and/or automatically include information along with the content after the content was created.
System 100 may include a computer 110, a geofeed API 111, a content consumer device 130, provider APIs 140, content providers 150, and/or other components. In some embodiments, computer 110 may include one or more processors 120 configured to perform some or all of the functionality of a plurality of modules, which may be stored in a memory 121. For example, the one or more processors 120 may be configured to execute a geofeed creation module 112, a content context module 113, a reality context module 114, an augmented reality module 115, an administration module 116, a communication module 117, a user interface module 118, and/or other modules 119. Geofeed API 111 may be used to interface with computer 110 in relation to the geofeeds.
Geofeed creation module 112 may be configured to create one or more geofeeds, as described in U.S. patent application Ser. No. 13/284,455 (issued on Feb. 18, 2014 as U.S. Pat. No. 8,655,873), filed Oct. 28, 2011, entitled “SYSTEM AND METHOD FOR AGGREGATING AND DISTRIBUTING GEOTAGGED CONTENT,” and U.S. patent application Ser. No. 13/619,888 (issued on Nov. 26, 2013 as U.S. Pat. No. 8,595,317), filed Sep. 14, 2012, entitled “SYSTEM AND METHOD FOR GENERATING, ACCESSING, AND UPDATING GEOFEEDS” both of which are incorporated by reference herein in their entireties.
U.S. patent application Ser. No. 13/708,516 (issued on Feb. 18, 2014 as U.S. Pat. No. 8,655,983), filed Dec. 7, 2012, entitled “SYSTEM AND METHOD FOR LOCATION MONITORING BASED ON ORGANIZED GEOFEEDS,” U.S. patent application Ser. No. 13/708,466 (issued on Jan. 28, 2014 as U.S. Pat. No. 8,639,767), filed Dec. 7, 2012, entitled “SYSTEM AND METHOD FOR GENERATING AND MANAGING GEOFEED-BASED ALERTS,” U.S. patent application Ser. No. 13/708,404 (issued on Jul. 9, 2013 as U.S. Pat. No. 8,484,224), filed Dec. 7, 2012, entitled “SYSTEM AND METHOD FOR RANKING GEOFEEDS AND CONTENT WITHIN GEOFEEDS,” U.S. patent application Ser. No. 13/788,843 (issued on Apr. 5, 2016 as U.S. Pat. No. 9,307,353), filed Mar. 7, 2013, entitled “SYSTEM AND METHOD FOR DIFFERENTIALLY PROCESSING A LOCATION INPUT FOR CONTENT PROVIDERS THAT USE DIFFERENT LOCATION INPUT FORMATS,” U.S. patent application Ser. No. 13/788,760 (issued on Dec. 17, 2013 as U.S. Pat. No. 8,612,533), filed Mar. 7, 2013, entitled “SYSTEM AND METHOD FOR CREATING AND MANAGING GEOFEEDS,” and U.S. patent application Ser. No. 13/788,909 (issued on Sep. 30, 2014 as U.S. Pat. No. 8,850,531), filed Mar. 7, 2013, entitled “SYSTEM AND METHOD FOR TARGETED MESSAGING, WORKFLOW MANAGEMENT, AND DIGITAL RIGHTS MANAGEMENT FOR GEOFEEDS,” are all incorporated by reference in their entireties herein.
U.S. patent application Ser. No. 13/843,949 (issued on Oct. 14, 2014 as U.S. Pat. No. 8,862,589), filed on Mar. 15, 2013, entitled “SYSTEM AND METHOD FOR PREDICTING A GEOGRAPHIC ORIGIN OF CONTENT AND ACCURACY OF GEOTAGS RELATED TO CONTENT OBTAINED FROM SOCIAL MEDIA AND OTHER CONTENT PROVIDERS,” and U.S. patent application Ser. No. 13/843,832 (issued on Sep. 30, 2014 as U.S. Pat. No. 8,849,935), filed on Mar. 15, 2013, entitled “SYSTEM AND METHOD FOR GENERATING THREE-DIMENSIONAL GEOFEEDS, ORIENTATION-BASED GEOFEEDS, AND GEOFEEDS BASED ON AMBIENT CONDITIONS,” are both incorporated by reference in their entireties herein.
Geofeed creation module 112 may be configured to generate one or more geofeeds based on content that is relevant to one or more geographically definable locations (“geo-locations”). The geofeed creation module may format requests that specify one or more geo-locations specifically for individual ones of the plurality of content providers and aggregate the content to form a geofeed. In some embodiments, geofeed creation module 112 may create a single geofeed having a plurality of geo-locations that are grouped with respect to one another. In other embodiments, geofeed creation module 112 may create multiple distinct geofeeds, which may each be associated with one or more geo-locations and may be grouped with respect to one another. In these embodiments, each set of individual content may correspond to a single geofeed.
For example, geofeed creation module 112 may format requests to individual ones of a plurality of APIs 140 (illustrated in
In some embodiments, geofeed creation module 112 may generate a geofeed definition that describes a geofeed such that a geofeed may be dynamically generated based on the geofeed definition. For example, the geofeed definition may include the geo-location specification, one or more geofeed parameters used to filter content aggregated from content providers 150, and/or other information related to the geofeed that can be used to aggregate content from various content providers. For example, the one or more geofeed parameters may be used to view only particular types of content, content from particular content providers, and/or other parameters by which to filter in or out content. The geofeed definition may be identified by a geofeed identifier and stored (e.g., in database 136) for later retrieval so that a content consumer or others may select and obtain a geofeed that was previously defined.
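A geofeed definition of this kind can be sketched as a small record holding the geo-location specification, the filtering parameters, and an identifier for later retrieval. This is a minimal illustration only; the field names are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class GeofeedDefinition:
    """Illustrative sketch of a stored geofeed definition: a geo-location
    specification plus parameters used to filter aggregated content."""
    geo_location: str                                        # e.g. a place name, zip code, or boundary
    include_hashtags: List[str] = field(default_factory=list)
    content_types: List[str] = field(default_factory=list)   # e.g. ["photo", "video"]
    providers: List[str] = field(default_factory=list)
    # Identifier under which the definition could be stored and retrieved.
    geofeed_id: str = field(default_factory=lambda: uuid.uuid4().hex)

definition = GeofeedDefinition(
    geo_location="Empire State Building, New York, NY",
    include_hashtags=["#nyc"],
    content_types=["photo"],
)
```

Such a record could be serialized into a database keyed by `geofeed_id` so a previously defined geofeed can be regenerated on demand.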
In some embodiments, geofeed creation module 112 may store the geofeed (e.g., in database 136). For example, geofeed creation module 112 may be configured to store the geofeed by aggregating content from content providers 150 in relation to the geofeed and store the content in association with a geofeed identifier and/or a geofeed definition.
In some embodiments, geofeed creation module 112 may use the credentials of a user for social media or other platform to access content. In this manner, geofeed creation module 112 may obtain content from a content provider using the credentials of the user. For example, geofeed creation module 112 may obtain from the user a username and password (with permission from the user) for the user's TWITTER account and obtain content from TWITTER to which the user has access.
In some implementations, content context module 113 may be configured to obtain (e.g., receive, retrieve, or determine) contextual information that describes the context in which the social media content was created. The contextual information for the content may include a geo-location, an ambient condition (e.g., temperature), an altitude, a motion or orientation based on sensor measurements from a device used to create the content, and/or other information that describes the context in which the social media content was created. Content context module may be configured to obtain the contextual information from the content itself, such as when the contextual information is available as Exchangeable Image File (“EXIF”) data embedded in images, from the social media content provider, and/or from other sources (e.g., from a user who created the content).
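As one concrete example of contextual information obtained from the content itself, EXIF metadata stores GPS coordinates as degree/minute/second rationals plus a hemisphere reference, which can be converted to the signed decimal degrees used for comparison. The helper below is an illustrative sketch, not part of the disclosure; the sample coordinates are near the Empire State Building.

```python
from fractions import Fraction

def gps_to_decimal(dms, ref):
    """Convert EXIF-style (degrees, minutes, seconds) rational pairs plus a
    hemisphere reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    degrees, minutes, seconds = (float(Fraction(*x)) for x in dms)
    value = degrees + minutes / 60 + seconds / 3600
    # South latitudes and west longitudes are negative by convention.
    return -value if ref in ("S", "W") else value

# 40 deg 44' 54.36" N, 73 deg 59' 8.36" W, as (numerator, denominator) rationals
lat = gps_to_decimal(((40, 1), (44, 1), (5436, 100)), "N")
lon = gps_to_decimal(((73, 1), (59, 1), (836, 100)), "W")
```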
The computer may be configured to determine social media content that is to augment the view of the physical space based on one or more items of the contextual information. The social media content may be filtered in and/or out using various geofeed parameters (e.g., hashtags, identification of types of content, content providers, etc.) described herein. Thus, a user may indicate that certain content be included in and/or excluded from consideration for augmenting the view of the physical space.
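Filtering content in and/or out with geofeed parameters amounts to applying include and exclude predicates over the aggregated items. A minimal sketch follows; the dict keys are illustrative stand-ins for whatever content records the providers' APIs actually return.

```python
def filter_content(items, include_hashtags=None, providers=None, exclude_hashtags=None):
    """Filter aggregated content items in and/or out using geofeed-style
    parameters (hashtags, content providers)."""
    kept = []
    for item in items:
        tags = set(item.get("hashtags", []))
        if include_hashtags and not tags & set(include_hashtags):
            continue  # filtered out: none of the required hashtags present
        if exclude_hashtags and tags & set(exclude_hashtags):
            continue  # filtered out: carries an excluded hashtag
        if providers and item.get("provider") not in providers:
            continue  # filtered out: wrong content provider
        kept.append(item)
    return kept

items = [
    {"provider": "twitter", "hashtags": ["#nyc", "#skyline"]},
    {"provider": "flickr", "hashtags": ["#nyc"]},
    {"provider": "twitter", "hashtags": ["#boston"]},
]
nyc_tweets = filter_content(items, include_hashtags=["#nyc"], providers=["twitter"])
```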
In some implementations, reality context module 114 may be configured to obtain contextual information that describes the context of a view of a physical space. The view of the physical space may include an image being displayed in real-time through a camera lens (e.g., through a display that displays a scene being captured by imaging sensors of a camera), an image that is stored and displayed (e.g., a photograph), and/or other views of a physical space. Contextual information that describes the context of a view of a physical space may include information similar to contextual information that describes social media content. For example, the contextual information that describes the context of the view of the physical space may include a geo-location of the physical space (e.g., a current location for real-time implementations and a location at which the view was taken for stored implementations) and/or other contextual information.
Reality context module 114 may be configured to obtain the reality contextual information from real-time measurements/information (e.g., location information from location sensors, temperature from temperature sensors, etc.). In some implementations, reality context module 114 may obtain the location based on image recognition of image features such as buildings, structures, and/or other identifiable objects taken from the view of the physical space.
In some implementations, augmented reality module 115 may be configured to compare one or more items of the content contextual information from content context module 113 with one or more items of the reality contextual information from reality context module 114. Augmented reality module 115 may determine a match (which may be exact or inexact) between the content contextual information and the reality contextual information.
Upon determining a match, augmented reality module 115 may augment the view of the physical space of the geo-location. For example, a location at which the social media content was created may be compared to a geo-location of the physical space being viewed. Augmented reality module 115 may determine that the social media content was created from the geo-location of the physical space being viewed and augment the view of the physical space with the social media content. Other contextual information may be used instead of or in addition to the location information to determine whether social media content should be used to augment the view of the physical space.
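An inexact location match of this kind can be sketched as a great-circle distance test against a configurable radius. The haversine formula and the 150 m default threshold below are illustrative choices, not requirements of the disclosure.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_match(content_loc, view_loc, threshold_m=150.0):
    """Inexact match: the content's creation point and the viewed
    geo-location agree if they fall within a configurable radius."""
    return haversine_m(*content_loc, *view_loc) <= threshold_m
```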
In some implementations, for example, augmented reality module 115 may be configured to compare other contextual information and/or combinations of contextual information such as, for example, ambient conditions, orientations, motion (e.g., motion of a device such as a camera device used to create the content), altitude, and/or other contextual information.
Some or all of the processing related to content context module 113, reality context module 114, and augmented reality module 115 may be performed at a device used to display the view of the physical space and/or at another device.
In some embodiments, administration module 116 may be configured to manage user accounts, set user roles such as security access roles, and/or perform other administrative operations. For example, the administration module may be used to define which users may generate messages using the unified message module, generate workflow items, view workflow items of others, annotate content, enter into agreements with respect to ownership rights of the content, and/or set other user roles.
In some embodiments, communication module 117 may be configured to share a geofeed via a content provider such as a social media provider, email, SMS text, and/or other communication channels. In some embodiments, the communication module may be configured to communicate a geofeed via various feeds such as Really Simple Syndication (“RSS”) and ATOM feeds, a vanity Uniform Resource Locator (“URL”) using a name of the geofeed (e.g., a name assigned by the content consumer), and/or other communication channels.
In some embodiments, the user interface module 118 may be configured to generate user interfaces that allow viewing and interaction with augmented views of physical spaces. Examples of such user interfaces are illustrated in
Those having skill in the art will recognize that computer 110 and content consumer device 130 may each comprise one or more processors, one or more interfaces (to various peripheral devices or components), memory, one or more storage devices, and/or other components coupled via a bus. The memory may comprise random access memory (RAM), read only memory (ROM), or other memory. The memory may store computer-executable instructions to be executed by the processor as well as data that may be manipulated by the processor. The storage devices may comprise floppy disks, hard disks, optical disks, tapes, or other storage devices for storing computer-executable instructions and/or data.
One or more applications, including various modules, may be loaded into memory and run on an operating system of computer 110 and/or consumer device 130. In one implementation, computer 110 and consumer device 130 may each comprise a server device, a desktop computer, a laptop, a cell phone, a smart phone, a Personal Digital Assistant, a pocket PC, or other device.
Network 102 may include any one or more of, for instance, the Internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a SAN (Storage Area Network), a MAN (Metropolitan Area Network), a wireless network, a cellular communications network, a Public Switched Telephone Network, and/or other network.
Various inputs, outputs, configurations, and/or other information described herein as being stored or storable may be stored in one or more databases (not illustrated in
User interface 200A may display a view of a physical space 210 augmented by an augmented reality (“AR”) space 220. AR space 220 may be overlaid onto, embedded within, or otherwise displayed alongside physical space 210 (e.g., a real-world space) such that graphical objects displayed on AR space 220 coincide with locations on physical space 210. In this manner, graphical objects on AR space 220 may appear to be associated with physical objects (e.g., real-world objects) in physical space 210. Physical space 210 and AR space 220 are illustrated as being separate solely for convenience of illustration.
A physical object 212 and its surroundings may be presented in a view of the physical space. Reality context module 114 (illustrated in
For example, reality context module 114 may determine that physical object 212 is located at a particular geo-location. As described herein, the particular geo-location may be determined based on conventional location techniques associated with a device that is displaying user interface 200A. For example, the device may include GPS sensors and/or other devices that can be used for localization. In some implementations, physical object 212 and/or other feature of physical space 210 may be used to determine the particular geo-location such as by image recognition and comparison to a database of known objects, for example.
Whichever location technique is used, augmented reality module 115 may identify social media and/or other content that was created from the particular geo-location. Users may have posted social media content to one or more social media providers from the particular geo-location. Augmented reality module 115 may identify or otherwise obtain such social media content and provide user interface 200A with the content. For example, user interface module 118 may include AR objects 222 (illustrated in
In some implementations, if information indicates that a particular social media content item was created at or near physical object 212, a corresponding AR object 222 may be positioned on AR space 220 to correspond to the physical object, thereby providing the user with an indication of this.
User interface 200B may provide a view of physical space 210 augmented with AR space 220. Physical object 212 may be visible in the augmented view. In the illustrated implementation, user interface 200B may include an indicator 230 that indicates that social media content was created at the direction indicated. For example, reality context module 114 may determine an orientation of the device being used to display user interface 200B. Such orientation may be determined based on sensor information from gyroscopes, accelerometers, magnetometers, and/or other sensors.
Augmented reality module 115 may determine content created from a geo-location of the physical space and, for example, an orientation at which the content was created. Augmented reality module 115 may determine that social media content is near a user's location but was created in an orientation that is different from the orientation of the device that is displaying user interface 200B.
Indicator 230 may indicate the direction of where social media content was created. For example, indicator 230 may indicate that social media content was posted while the user who posted the content was in an orientation that is in a direction as indicated by the indicator. In other words, if the device that displays user interface 200B turns in the direction indicated by indicator 230, the social media content will be made visible in AR space 220. In this manner, a user may explore a given scenery to determine what previous users may have posted about the given scenery observed from the same perspective (e.g., orientation). For example, hobbyists such as stargazers may gain insight into what previous stargazers may have been observing from a particular vantage point and orientation (e.g., zenith, azimuth, etc.) toward the sky. Tourists may view what others have posted about a particular scenic view or attraction.
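Deciding which way indicator 230 should point reduces to the signed angular offset between the device's current heading and the heading at which the content was created. The sketch below is illustrative only; the 15-degree "ahead" window stands in for whatever field of view the camera actually has.

```python
def turn_direction(device_heading, content_heading):
    """Which way should the user turn so content created at another
    orientation comes into view? Headings are compass degrees (0 = north).
    Returns ('left' | 'right' | 'ahead', degrees_to_turn)."""
    # Signed offset normalized into [-180, 180).
    delta = (content_heading - device_heading + 540) % 360 - 180
    if abs(delta) < 15:  # roughly within the camera's field of view
        return "ahead", abs(delta)
    return ("right" if delta > 0 else "left"), abs(delta)
```

The normalization handles wraparound across north, so a device facing 350 degrees is correctly told to turn right, not left, to reach content created facing 10 degrees.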
Augmented reality module 115 may determine content created from physical object 216 and, for example, at which altitude. Augmented reality module 115 may correlate the altitude at which the social media and/or other content was created with the altitude of the physical object 216. Based on the correlation, augmented reality module 115 may cause AR objects 224 corresponding to the social media and/or other content to be displayed at their respective altitudes on physical object 216. For example, hotel or commercial building owners may post social media content (which may include marketing or other branded materials) from their respective buildings at different floors. User interface 200C may then be used to view the building augmented with the social media posts such that a passerby or other interested user may image the building and obtain an augmented image. Other uses are contemplated as well. For example, a user may enter the building and travel to various floors and receive an augmented view of each floor based on social media content that was posted from that particular building and that particular floor. Furthermore, although illustrated as a building, physical object 216 may include other types of structures for which different altitudes may be estimated and/or traversed for augmented views of the physical object.
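Correlating creation altitudes with positions on the building can be sketched as bucketing posts by estimated floor. The uniform 3.5 m floor height and the field names below are assumptions for illustration, not values from the disclosure.

```python
def floor_of(altitude_m, ground_m=0.0, floor_height_m=3.5):
    """Map a creation altitude to a floor index (0 = ground floor),
    assuming a uniform per-floor height."""
    return max(0, int((altitude_m - ground_m) // floor_height_m))

def group_by_floor(posts, ground_m=0.0):
    """Bucket content items by estimated floor so AR objects can be
    drawn at the matching height on the building."""
    floors = {}
    for post in posts:
        floors.setdefault(floor_of(post["altitude_m"], ground_m), []).append(post)
    return floors

posts = [{"id": 1, "altitude_m": 2.0}, {"id": 2, "altitude_m": 39.0}]
by_floor = group_by_floor(posts)
```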
In an operation 302, a view of the particular scenery may be obtained (e.g., photographed, videographed, imaged live, etc.). In an operation 304, the imaged scenery may be processed. For example, one or more features in the scenery may be processed using conventional image processing techniques to recognize a location. A location may be recognized based on the processing in an operation 306. For example, a landmark such as the Empire State Building may be recognized and a location for the landmark may be obtained.
In an operation 308, content created from the location may be obtained. In an operation 310, the content may be used to augment the view of the particular scenery with graphical elements that represent the content created from the location.
In an operation 402, a view of a physical space may be obtained (e.g., photographed, videographed, imaged live, etc.). In an operation 404, contextual information related to the physical space may be obtained. The reality contextual information may include a geo-location, an ambient condition, an altitude, and/or other reality contextual information. For example, if the physical space has been imaged and stored as a photograph, the contextual information may be obtained from EXIF data or other data source that describes the image and/or the physical space. On the other hand, if the physical space is being currently imaged (e.g., live), then the reality contextual information may be obtained from one or more sensors on-board the device used to image the physical space, other sensors, inputs by an operator of the device, and/or other source of reality contextual information.
In an operation 406, contextual information that describes the content (e.g., social media content) may be obtained. In an operation 408, a determination is made as to whether the contextual information of the content matches the reality contextual information. Such matching may be exact or inexact (e.g., within a predefined and/or configurable threshold) and may include matching location, orientation, ambient conditions, altitude, and/or other contextual information that can be automatically measured or determined. In some implementations, such matching may include matching information provided by users.
If a match is found, the view of the physical space may be augmented with graphical objects representative of the content whose contextual information matches the reality contextual information in an operation 410. Processing may then proceed to an operation 412, where a determination of whether more content is available for processing is made. If more content is available, processing may return to operation 406.
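Operations 406 through 412 amount to a loop that compares each candidate item's contextual information against the reality contextual information and collects the matches for rendering as AR objects. The sketch below uses simple per-attribute thresholds; the field names and threshold values are illustrative assumptions, not part of the disclosure.

```python
def augment_view(reality_ctx, content_items, thresholds):
    """One pass over candidate content: keep items whose contextual
    information (inexactly) matches the reality contextual information,
    and describe each match as an AR object to render."""
    ar_objects = []
    for item in content_items:
        ctx = item["context"]
        if abs(ctx["lat"] - reality_ctx["lat"]) > thresholds["deg"]:
            continue  # created too far away in latitude
        if abs(ctx["lon"] - reality_ctx["lon"]) > thresholds["deg"]:
            continue  # created too far away in longitude
        if "altitude_m" in ctx and abs(ctx["altitude_m"] - reality_ctx.get("altitude_m", 0.0)) > thresholds["alt_m"]:
            continue  # altitude disagrees beyond the allowed tolerance
        ar_objects.append({"label": item["text"], "anchor": (ctx["lat"], ctx["lon"])})
    return ar_objects

reality_ctx = {"lat": 40.7484, "lon": -73.9857, "altitude_m": 5.0}
content_items = [
    {"text": "Great view!", "context": {"lat": 40.7485, "lon": -73.9856, "altitude_m": 8.0}},
    {"text": "Boston pics", "context": {"lat": 42.3601, "lon": -71.0589}},
]
matches = augment_view(reality_ctx, content_items, {"deg": 0.001, "alt_m": 10.0})
```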
Other embodiments, uses and advantages of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The specification should be considered exemplary only, and the scope of the invention is accordingly intended to be limited only by the following claims.
This application is a continuation of U.S. patent application Ser. No. 14/215,612, entitled “VIEW OF A PHYSICAL SPACE AUGMENTED WITH SOCIAL MEDIA CONTENT ORIGINATING FROM A GEO-LOCATION OF THE PHYSICAL SPACE,” filed Mar. 17, 2014, which claims priority to U.S. Provisional Patent Application No. 61/800,951, filed Mar. 15, 2013, the content of which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6026368 | Brown | Feb 2000 | A |
6363320 | Chou | Mar 2002 | B1 |
6591266 | Li | Jul 2003 | B1 |
7522940 | Jendbro | Apr 2009 | B2 |
7680796 | Yeh | Mar 2010 | B2 |
7698336 | Nath | Apr 2010 | B2 |
7912451 | Eckhart | Mar 2011 | B2 |
7974983 | Goeldi | Jul 2011 | B2 |
8103741 | Frazier | Jan 2012 | B2 |
8341223 | Patton | Dec 2012 | B1 |
8428228 | Baxter, Jr. | Apr 2013 | B1 |
8484224 | Harris | Jul 2013 | B1 |
8595317 | Harris | Nov 2013 | B1 |
8612533 | Harris | Dec 2013 | B1 |
8639767 | Harris | Jan 2014 | B1 |
8655873 | Mitchell | Feb 2014 | B2 |
8655983 | Harris | Feb 2014 | B1 |
8812951 | White | Aug 2014 | B1 |
8843515 | Burris | Sep 2014 | B2 |
8849935 | Harris | Sep 2014 | B1 |
8850531 | Harris | Sep 2014 | B1 |
8862589 | Harris | Oct 2014 | B2 |
8990346 | Harris | Mar 2015 | B2 |
9055074 | Harris | Jun 2015 | B2 |
9077675 | Harris | Jul 2015 | B2 |
9077782 | Harris | Jul 2015 | B2 |
9258373 | Harris | Feb 2016 | B2 |
9307353 | Harris | Apr 2016 | B2 |
9317600 | Harris | Apr 2016 | B2 |
9369533 | Harris | Jun 2016 | B2 |
9436690 | Harris | Sep 2016 | B2 |
9443090 | Harris | Sep 2016 | B2 |
9479557 | Harris | Oct 2016 | B2 |
9485318 | Harris | Nov 2016 | B1 |
9497275 | Harris | Nov 2016 | B2 |
20020029226 | Li | Mar 2002 | A1 |
20020029384 | Griggs | Mar 2002 | A1 |
20020116505 | Higgins | Aug 2002 | A1 |
20020128908 | Levin | Sep 2002 | A1 |
20020188669 | Levine | Dec 2002 | A1 |
20030018607 | Lennon | Jan 2003 | A1 |
20030025832 | Swart | Feb 2003 | A1 |
20030040971 | Freedenberg | Feb 2003 | A1 |
20030088609 | Guedalia | May 2003 | A1 |
20040203854 | Nowak | Oct 2004 | A1 |
20040205585 | McConnell | Oct 2004 | A1 |
20040225635 | Toyama | Nov 2004 | A1 |
20050034074 | Munson | Feb 2005 | A1 |
20060002317 | Venkata | Jan 2006 | A1 |
20060106778 | Baldwin | May 2006 | A1 |
20060184968 | Clayton | Aug 2006 | A1 |
20060200305 | Sheha | Sep 2006 | A1 |
20070043721 | Ghemawat | Feb 2007 | A1 |
20070112729 | Wiseman | May 2007 | A1 |
20070121843 | Atazky | May 2007 | A1 |
20070143345 | Jones | Jun 2007 | A1 |
20070210937 | Smith | Sep 2007 | A1 |
20070276919 | Buchmann | Nov 2007 | A1 |
20070294299 | Goldstein | Dec 2007 | A1 |
20080092054 | Bhumkar | Apr 2008 | A1 |
20080104019 | Nath | May 2008 | A1 |
20080125969 | Chen | May 2008 | A1 |
20080147674 | Nandiwada | Jun 2008 | A1 |
20080162540 | Parikh | Jul 2008 | A1 |
20080189099 | Friedman | Aug 2008 | A1 |
20080192934 | Nelger | Aug 2008 | A1 |
20080250031 | Ting | Oct 2008 | A1 |
20080294603 | Ranjan | Nov 2008 | A1 |
20090005968 | Vengroff | Jan 2009 | A1 |
20090102859 | Athsani | Apr 2009 | A1 |
20090132435 | Titus | May 2009 | A1 |
20090138497 | Zavoli | May 2009 | A1 |
20090210426 | Kulakov | Aug 2009 | A1 |
20090217232 | Beerel | Aug 2009 | A1 |
20090222482 | Klassen | Sep 2009 | A1 |
20090297118 | Fink | Dec 2009 | A1 |
20090300528 | Stambaugh | Dec 2009 | A1 |
20090327232 | Carter | Dec 2009 | A1 |
20100010907 | Dasgupta | Jan 2010 | A1 |
20100030648 | Manolescu | Feb 2010 | A1 |
20100076968 | Boyns | Mar 2010 | A1 |
20100079338 | Wooden | Apr 2010 | A1 |
20100083124 | Druzgalski | Apr 2010 | A1 |
20100145947 | Kolman | Jun 2010 | A1 |
20100149399 | Mukai | Jun 2010 | A1 |
20100153386 | Tysowski | Jun 2010 | A1 |
20100153410 | Jin | Jun 2010 | A1 |
20100174998 | Lazarus | Jul 2010 | A1 |
20100177120 | Balfour | Jul 2010 | A1 |
20100180001 | Hardt | Jul 2010 | A1 |
20110007941 | Chen | Jan 2011 | A1 |
20110010674 | Knize | Jan 2011 | A1 |
20110035284 | Moshfeghi | Feb 2011 | A1 |
20110040894 | Shrum | Feb 2011 | A1 |
20110055176 | Choi | Mar 2011 | A1 |
20110072106 | Hoffert | Mar 2011 | A1 |
20110072114 | Hoffert | Mar 2011 | A1 |
20110078584 | Winterstein | Mar 2011 | A1 |
20110083013 | Nice | Apr 2011 | A1 |
20110113096 | Long | May 2011 | A1 |
20110123066 | Chen | May 2011 | A9 |
20110131496 | Abram | Jun 2011 | A1 |
20110137561 | Kankainen | Jun 2011 | A1 |
20110142347 | Chen | Jun 2011 | A1 |
20110153368 | Pierre | Jun 2011 | A1 |
20110202544 | Carle | Aug 2011 | A1 |
20110227699 | Seth | Sep 2011 | A1 |
20110270940 | Johnson | Nov 2011 | A1 |
20110288917 | Wanek | Nov 2011 | A1 |
20110307307 | Benmbarek | Dec 2011 | A1 |
20120001938 | Sandberg | Jan 2012 | A1 |
20120047219 | Feng | Feb 2012 | A1 |
20120077521 | Boldyrev | Mar 2012 | A1 |
20120078503 | Dzubay | Mar 2012 | A1 |
20120084323 | Epshtein | Apr 2012 | A1 |
20120101880 | Alexander | Apr 2012 | A1 |
20120124161 | Tidwell | May 2012 | A1 |
20120150901 | Johnson | Jun 2012 | A1 |
20120158536 | Gratton | Jun 2012 | A1 |
20120166367 | Murdock | Jun 2012 | A1 |
20120212398 | Border | Aug 2012 | A1 |
20120221687 | Hunter | Aug 2012 | A1 |
20120232939 | Pierre | Sep 2012 | A1 |
20120233158 | Braginsky | Sep 2012 | A1 |
20120239763 | Musil | Sep 2012 | A1 |
20120254774 | Patton | Oct 2012 | A1 |
20120259791 | Zoidze | Oct 2012 | A1 |
20120276848 | Krattiger | Nov 2012 | A1 |
20120276918 | Krattiger | Nov 2012 | A1 |
20120323687 | Schuster | Dec 2012 | A1 |
20120330959 | Kretz | Dec 2012 | A1 |
20130013713 | Shoham | Jan 2013 | A1 |
20130018957 | Parnaby | Jan 2013 | A1 |
20130051611 | Hicks | Feb 2013 | A1 |
20130054672 | Stilling | Feb 2013 | A1 |
20130060796 | Gilg | Mar 2013 | A1 |
20130073388 | Heath | Mar 2013 | A1 |
20130073389 | Heath | Mar 2013 | A1 |
20130073631 | Patton | Mar 2013 | A1 |
20130110631 | Mitchell | May 2013 | A1 |
20130110641 | Ormont | May 2013 | A1 |
20130124437 | Pennacchiotti | May 2013 | A1 |
20130131918 | Hahne | May 2013 | A1 |
20130132194 | Rajaram | May 2013 | A1 |
20130150015 | Valk | Jun 2013 | A1 |
20130159463 | Bentley | Jun 2013 | A1 |
20130201182 | Kuroki | Aug 2013 | A1 |
20130238599 | Burris | Sep 2013 | A1 |
20130238652 | Burris | Sep 2013 | A1 |
20130238658 | Burris | Sep 2013 | A1 |
20130268558 | Burris | Oct 2013 | A1 |
20130290554 | Chen | Oct 2013 | A1 |
20130325964 | Berberat | Dec 2013 | A1 |
20130346563 | Huang | Dec 2013 | A1 |
20140025911 | Sims | Jan 2014 | A1 |
20140040371 | Gurevich | Feb 2014 | A1 |
20140089296 | Burris | Mar 2014 | A1 |
20140089343 | Burris | Mar 2014 | A1 |
20140089461 | Harris | Mar 2014 | A1 |
20140095509 | Patton | Apr 2014 | A1 |
20140164368 | Mitchell | Jun 2014 | A1 |
20140195918 | Friedlander | Jul 2014 | A1 |
20140207893 | Harris | Jul 2014 | A1 |
20140222950 | Rabel | Aug 2014 | A1 |
20140236882 | Rishe | Aug 2014 | A1 |
20140256355 | Harris | Sep 2014 | A1 |
20140258451 | Harris | Sep 2014 | A1 |
20140259113 | Harris | Sep 2014 | A1 |
20140274148 | Harris | Sep 2014 | A1 |
20140280103 | Harris | Sep 2014 | A1 |
20140280278 | Harris | Sep 2014 | A1 |
20140280569 | Harris | Sep 2014 | A1 |
20140297740 | Narayanan | Oct 2014 | A1 |
20150019648 | Harris | Jan 2015 | A1 |
20150019866 | Braness | Jan 2015 | A1 |
20150020208 | Harris | Jan 2015 | A1 |
20150032739 | Harris | Jan 2015 | A1 |
20150172396 | Longo | Jun 2015 | A1 |
20150256632 | Harris | Sep 2015 | A1 |
20150381380 | Harris | Dec 2015 | A1 |
20160006783 | Harris | Jan 2016 | A1 |
20160014219 | Harris | Jan 2016 | A1 |
20160182656 | Harris | Jun 2016 | A1 |
20160219403 | Harris | Jul 2016 | A1 |
20160283561 | Harris | Sep 2016 | A1 |
Number | Date | Country |
---|---|---|
1045345 | Oct 2000 | EP |
2187594 | May 2010 | EP |
2293566 | Mar 2011 | EP |
9915995 | Apr 1999 | WO |
2010049918 | May 2010 | WO |
2013133870 | Sep 2013 | WO |
2013134451 | Sep 2013 | WO |
Entry |
---|
Amitay et al., “Web-a-Where: Geotagging Web Content”, Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2004, pp. 273-280. |
Bao, Jie, et al., “GeoFeed: A Location-Aware News Feed System”, IEEE Xplore Digital Library, Published in 2012 IEEE 28th International Conference on Data Engineering, Apr. 1-5, 2012, 14 pages. |
Chow et al., “Towards Location-Based Social Networking Services”, LBSN 2010 Proceedings of the 2nd ACM SIGSPATIAL International Workshop on Location Based Social Networks, Nov. 2, 2010, pp. 31-38. |
Lee et al., “Tag-Geotag Correlation in Social Networks”, Proceedings of the 2008 ACM Workshop on Search in Social Media, 2008, pp. 59-66. |
Sarwat, Mohamed, et al., “Sindbad: A Location-Based Social Networking System”, SIGMOD '12, Scottsdale, Arizona, May 20-24, 2012, 4 pages. |
U.S. Appl. No. 13/788,843, a Notice of Allowance, mailed Dec. 3, 2015, 18 pages. |
U.S. Appl. No. 14/500,881, a non-final Office Action, mailed Sep. 21, 2015, 5 pages. |
U.S. Appl. No. 13/284,455, a non-final Office Action, mailed Jan. 7, 2013, 18 pages. |
U.S. Appl. No. 13/284,455, a non-final Office Action, mailed Jun. 4, 2013, 28 pages. |
U.S. Appl. No. 13/284,455, a Notice of Allowance, mailed Oct. 4, 2013, 17 pages. |
U.S. Appl. No. 13/619,888, a non-final Office Action, mailed Mar. 1, 2013, 15 pages. |
U.S. Appl. No. 13/619,888, a Notice of Allowance, mailed Jul. 9, 2013, 10 pages. |
U.S. Appl. No. 13/708,404, a Notice of Allowance, mailed May 24, 2013, 12 pages. |
U.S. Appl. No. 13/708,466, a non-final Office Action, mailed Apr. 17, 2013, 15 pages. |
U.S. Appl. No. 13/708,466, a Notice of Allowance, mailed Sep. 3, 2013, 11 pages. |
U.S. Appl. No. 13/708,516, a non-final Office Action, mailed May 15, 2013, 11 pages. |
U.S. Appl. No. 13/708,516, a Notice of Allowance, mailed Jun. 7, 2013, 14 pages. |
U.S. Appl. No. 13/788,760, a Notice of Allowance, mailed Jul. 26, 2013, 12 pages. |
U.S. Appl. No. 13/788,843, a final Office Action, mailed Jan. 21, 2014, 25 pages. |
U.S. Appl. No. 13/788,843, a non-final Office Action, mailed Aug. 5, 2013, 17 pages. |
U.S. Appl. No. 13/788,843, a non-final Office Action, mailed Feb. 20, 2015, 26 pages. |
U.S. Appl. No. 13/788,909, a non-final Office Action, mailed Aug. 12, 2013, 17 pages. |
U.S. Appl. No. 13/788,909, a Notice of Allowance, mailed Jan. 24, 2014, 12 pages. |
U.S. Appl. No. 13/788,909, a Notice of Allowance, mailed Jun. 24, 2014, 11 pages. |
U.S. Appl. No. 13/843,832, a non-final Office Action, mailed Sep. 13, 2013, 12 pages. |
U.S. Appl. No. 13/843,832, a Notice of Allowance, mailed Jan. 24, 2014, 6 pages. |
U.S. Appl. No. 13/843,832, a Notice of Allowance, mailed May 20, 2014, 7 pages. |
U.S. Appl. No. 13/843,949, a non-final Office Action, mailed Aug. 29, 2013, 12 pages. |
U.S. Appl. No. 13/843,949, a Notice of Allowance, mailed Feb. 3, 2014, 11 pages. |
U.S. Appl. No. 13/843,949, a Notice of Allowance, mailed May 9, 2014, 10 pages. |
U.S. Appl. No. 14/089,631, a final Office Action, mailed Jan. 2, 2015, 8 pages. |
U.S. Appl. No. 14/089,631, a non-final Office Action, mailed Jul. 8, 2014, 21 pages. |
U.S. Appl. No. 14/089,631, a Notice of Allowance, mailed Feb. 2, 2015, 10 pages. |
U.S. Appl. No. 14/108,301, a non-final Office Action, mailed Sep. 11, 2014, 10 pages. |
U.S. Appl. No. 14/108,301, a Notice of Allowance, mailed Feb. 20, 2015, 13 pages. |
U.S. Appl. No. 14/164,362, a non-final Office Action, mailed Oct. 23, 2014, 15 pages. |
U.S. Appl. No. 14/164,362, a Notice of Allowance, mailed Feb. 24, 2015, 22 pages. |
U.S. Appl. No. 14/180,473, a final Office Action, mailed Jan. 5, 2015, 7 pages. |
U.S. Appl. No. 14/180,473, a non-final Office Action, mailed Jul. 8, 2014, 18 pages. |
U.S. Appl. No. 14/180,473, a Notice of Allowance, mailed Jan. 27, 2015, 8 pages. |
U.S. Appl. No. 14/180,845, a final Office Action, mailed Feb. 22, 2016, 43 pages. |
U.S. Appl. No. 14/180,845, a final Office Action, mailed Feb. 25, 2015, 32 pages. |
U.S. Appl. No. 14/180,845, a non-final Office Action, mailed Aug. 27, 2015, 43 pages. |
U.S. Appl. No. 14/180,845, a non-final Office Action, mailed Oct. 23, 2014, 32 pages. |
U.S. Appl. No. 14/215,612, a final Office Action, mailed Nov. 28, 2014, 31 pages. |
U.S. Appl. No. 14/215,612, a non-final Office Action, mailed Jul. 11, 2014, 16 pages. |
U.S. Appl. No. 14/215,612, a non-final Office Action, mailed Aug. 18, 2015, 27 pages. |
U.S. Appl. No. 14/500,832, a non-final Office Action, mailed May 21, 2015, 13 pages. |
U.S. Appl. No. 14/500,881, a non-final Office Action, mailed Dec. 21, 2015, 24 pages. |
U.S. Appl. No. 14/512,293, a final Office Action, mailed Apr. 6, 2016, 9 pages. |
U.S. Appl. No. 14/512,293, a final Office Action, mailed Aug. 14, 2015, 15 pages. |
U.S. Appl. No. 14/512,293, a non-final Office Action, mailed Dec. 9, 2015, 14 pages. |
U.S. Appl. No. 14/512,293, a non-final Office Action, mailed Jan. 28, 2015, 18 pages. |
U.S. Appl. No. 14/666,056, a final Office Action, mailed Jan. 4, 2016, 11 pages. |
U.S. Appl. No. 14/666,056, a non-final Office Action, mailed Aug. 10, 2015, 17 pages. |
U.S. Appl. No. 14/733,715, a non-final Office Action, mailed Mar. 11, 2016, 25 pages. |
U.S. Appl. No. 14/792,538, a non-final Office Action, mailed Feb. 26, 2016, 20 pages. |
U.S. Appl. No. 14/813,031, a final Office Action, mailed Mar. 21, 2016, 41 pages. |
U.S. Appl. No. 14/813,031, a non-final Office Action, mailed Nov. 24, 2015, 23 pages. |
U.S. Appl. No. 14/813,039, a final Office Action, mailed May 16, 2016, 14 pages. |
U.S. Appl. No. 14/813,039, a non-final Office Action, mailed Jan. 20, 2016, 20 pages. |
U.S. Appl. No. 14/180,845, a non-final Office Action, mailed Jul. 7, 2016, 51 pages. |
U.S. Appl. No. 14/733,715, a final Office Action, mailed Aug. 17, 2016, 21 pages. |
U.S. Appl. No. 15/018,767, a non-final Office Action, mailed Jun. 6, 2016, 19 pages. |
U.S. Appl. No. 15/241,836, a non-final Office Action, mailed Oct. 7, 2016, 38 pages. |
Number | Date | Country | |
---|---|---|---|
20160232182 A1 | Aug 2016 | US |
Number | Date | Country | |
---|---|---|---|
61800951 | Mar 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14215612 | Mar 2014 | US |
Child | 15130289 | US |